Test Report: KVM_Linux_crio 19450

                    
8d898ab9c8ea504736c6a6ac30beb8b93591f909:2024-08-15:35798

Failed tests (33/312)

Order  Failed test  Duration (s)
34 TestAddons/parallel/Ingress 215.44
36 TestAddons/parallel/MetricsServer 349.39
45 TestAddons/StoppedEnableDisable 154.4
128 TestFunctional/parallel/ImageCommands/ImageRemove 3.25
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 4.59
164 TestMultiControlPlane/serial/StopSecondaryNode 141.78
166 TestMultiControlPlane/serial/RestartSecondaryNode 58.51
168 TestMultiControlPlane/serial/RestartClusterKeepsNodes 380.83
171 TestMultiControlPlane/serial/StopCluster 141.77
231 TestMultiNode/serial/RestartKeepsNodes 328.32
233 TestMultiNode/serial/StopMultiNode 141.4
240 TestPreload 274.89
248 TestKubernetesUpgrade 419.05
268 TestPause/serial/SecondStartNoReconfiguration 10.61
284 TestStartStop/group/old-k8s-version/serial/FirstStart 297.27
294 TestStartStop/group/no-preload/serial/Stop 139.07
297 TestStartStop/group/embed-certs/serial/Stop 139.12
300 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.09
301 TestStartStop/group/old-k8s-version/serial/DeployApp 0.47
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 114.81
303 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
311 TestStartStop/group/old-k8s-version/serial/SecondStart 740.39
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.22
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.11
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.2
315 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.42
316 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 501.29
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 415.35
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 378.31
319 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 100.55
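Any of the failures above can be re-run in isolation from a minikube source checkout using Go's subtest filter. A minimal sketch (the integration build tag and the kvm2/crio start arguments this job supplies through its harness are assumptions, not taken from this report):

	go test -tags=integration ./test/integration -run 'TestAddons/parallel/Ingress' -v -timeout 90m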
TestAddons/parallel/Ingress (215.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-973562 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-973562 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-973562 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f83b1404-c3f9-436f-a4fa-c82dd8ac7b90] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f83b1404-c3f9-436f-a4fa-c82dd8ac7b90] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 1m13.004127107s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-973562 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.616969892s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-973562 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.200
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-973562 addons disable ingress-dns --alsologtostderr -v=1: (1.160966104s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-973562 addons disable ingress --alsologtostderr -v=1: (7.69371612s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-973562 -n addons-973562
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-973562 logs -n 25: (1.174965746s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-709194                                                                     | download-only-709194 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:06 UTC |
	| delete  | -p download-only-379390                                                                     | download-only-379390 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-174247 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC |                     |
	|         | binary-mirror-174247                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41239                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-174247                                                                     | binary-mirror-174247 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC |                     |
	|         | addons-973562                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC |                     |
	|         | addons-973562                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-973562 --wait=true                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-973562 ssh cat                                                                       | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | /opt/local-path-provisioner/pvc-a475e29f-cfc6-4625-8bed-59ac85b175a1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:11 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-973562 ip                                                                            | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | -p addons-973562                                                                            |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | addons-973562                                                                               |                      |         |         |                     |                     |
	| addons  | addons-973562 addons                                                                        | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:11 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | addons-973562                                                                               |                      |         |         |                     |                     |
	| addons  | addons-973562 addons                                                                        | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | -p addons-973562                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:12 UTC | 15 Aug 24 17:12 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-973562 ssh curl -s                                                                   | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:12 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-973562 ip                                                                            | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:14 UTC | 15 Aug 24 17:14 UTC |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:14 UTC | 15 Aug 24 17:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:14 UTC | 15 Aug 24 17:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:06:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:06:25.300617   21063 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:06:25.300876   21063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:06:25.300886   21063 out.go:358] Setting ErrFile to fd 2...
	I0815 17:06:25.300890   21063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:06:25.301072   21063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:06:25.301633   21063 out.go:352] Setting JSON to false
	I0815 17:06:25.302421   21063 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2931,"bootTime":1723738654,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:06:25.302472   21063 start.go:139] virtualization: kvm guest
	I0815 17:06:25.304709   21063 out.go:177] * [addons-973562] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:06:25.305859   21063 notify.go:220] Checking for updates...
	I0815 17:06:25.305898   21063 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:06:25.307151   21063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:06:25.308452   21063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:06:25.309693   21063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:06:25.310870   21063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:06:25.311955   21063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:06:25.313273   21063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:06:25.343867   21063 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 17:06:25.345406   21063 start.go:297] selected driver: kvm2
	I0815 17:06:25.345427   21063 start.go:901] validating driver "kvm2" against <nil>
	I0815 17:06:25.345438   21063 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:06:25.346089   21063 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:06:25.346151   21063 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 17:06:25.360253   21063 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 17:06:25.360304   21063 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:06:25.360548   21063 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:06:25.360630   21063 cni.go:84] Creating CNI manager for ""
	I0815 17:06:25.360647   21063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 17:06:25.360661   21063 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:06:25.360739   21063 start.go:340] cluster config:
	{Name:addons-973562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:06:25.360844   21063 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:06:25.362675   21063 out.go:177] * Starting "addons-973562" primary control-plane node in "addons-973562" cluster
	I0815 17:06:25.364055   21063 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:06:25.364086   21063 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:06:25.364105   21063 cache.go:56] Caching tarball of preloaded images
	I0815 17:06:25.364203   21063 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:06:25.364237   21063 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:06:25.364614   21063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/config.json ...
	I0815 17:06:25.364638   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/config.json: {Name:mkb53d52d787f17d133a7c9739d3e174f96bcdf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:25.364774   21063 start.go:360] acquireMachinesLock for addons-973562: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:06:25.364817   21063 start.go:364] duration metric: took 30.636µs to acquireMachinesLock for "addons-973562"
	I0815 17:06:25.364833   21063 start.go:93] Provisioning new machine with config: &{Name:addons-973562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:06:25.364902   21063 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 17:06:25.366501   21063 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0815 17:06:25.366689   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:06:25.366733   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:06:25.380543   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I0815 17:06:25.380941   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:06:25.381487   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:06:25.381505   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:06:25.381817   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:06:25.381985   21063 main.go:141] libmachine: (addons-973562) Calling .GetMachineName
	I0815 17:06:25.382137   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:25.382262   21063 start.go:159] libmachine.API.Create for "addons-973562" (driver="kvm2")
	I0815 17:06:25.382283   21063 client.go:168] LocalClient.Create starting
	I0815 17:06:25.382312   21063 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 17:06:25.517440   21063 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 17:06:25.758135   21063 main.go:141] libmachine: Running pre-create checks...
	I0815 17:06:25.758162   21063 main.go:141] libmachine: (addons-973562) Calling .PreCreateCheck
	I0815 17:06:25.758620   21063 main.go:141] libmachine: (addons-973562) Calling .GetConfigRaw
	I0815 17:06:25.759012   21063 main.go:141] libmachine: Creating machine...
	I0815 17:06:25.759026   21063 main.go:141] libmachine: (addons-973562) Calling .Create
	I0815 17:06:25.759162   21063 main.go:141] libmachine: (addons-973562) Creating KVM machine...
	I0815 17:06:25.760400   21063 main.go:141] libmachine: (addons-973562) DBG | found existing default KVM network
	I0815 17:06:25.761287   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:25.761135   21085 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0815 17:06:25.761322   21063 main.go:141] libmachine: (addons-973562) DBG | created network xml: 
	I0815 17:06:25.761338   21063 main.go:141] libmachine: (addons-973562) DBG | <network>
	I0815 17:06:25.761346   21063 main.go:141] libmachine: (addons-973562) DBG |   <name>mk-addons-973562</name>
	I0815 17:06:25.761358   21063 main.go:141] libmachine: (addons-973562) DBG |   <dns enable='no'/>
	I0815 17:06:25.761370   21063 main.go:141] libmachine: (addons-973562) DBG |   
	I0815 17:06:25.761379   21063 main.go:141] libmachine: (addons-973562) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 17:06:25.761389   21063 main.go:141] libmachine: (addons-973562) DBG |     <dhcp>
	I0815 17:06:25.761394   21063 main.go:141] libmachine: (addons-973562) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 17:06:25.761403   21063 main.go:141] libmachine: (addons-973562) DBG |     </dhcp>
	I0815 17:06:25.761410   21063 main.go:141] libmachine: (addons-973562) DBG |   </ip>
	I0815 17:06:25.761416   21063 main.go:141] libmachine: (addons-973562) DBG |   
	I0815 17:06:25.761423   21063 main.go:141] libmachine: (addons-973562) DBG | </network>
	I0815 17:06:25.761433   21063 main.go:141] libmachine: (addons-973562) DBG | 
	I0815 17:06:25.766318   21063 main.go:141] libmachine: (addons-973562) DBG | trying to create private KVM network mk-addons-973562 192.168.39.0/24...
	I0815 17:06:25.827958   21063 main.go:141] libmachine: (addons-973562) DBG | private KVM network mk-addons-973562 192.168.39.0/24 created
	I0815 17:06:25.827983   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:25.827913   21085 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:06:25.827991   21063 main.go:141] libmachine: (addons-973562) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562 ...
	I0815 17:06:25.828003   21063 main.go:141] libmachine: (addons-973562) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:06:25.828060   21063 main.go:141] libmachine: (addons-973562) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 17:06:26.073809   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:26.073693   21085 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa...
	I0815 17:06:26.207228   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:26.207070   21085 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/addons-973562.rawdisk...
	I0815 17:06:26.207263   21063 main.go:141] libmachine: (addons-973562) DBG | Writing magic tar header
	I0815 17:06:26.207279   21063 main.go:141] libmachine: (addons-973562) DBG | Writing SSH key tar header
	I0815 17:06:26.207289   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:26.207227   21085 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562 ...
	I0815 17:06:26.207831   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562
	I0815 17:06:26.207856   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 17:06:26.207869   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562 (perms=drwx------)
	I0815 17:06:26.207884   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 17:06:26.207894   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 17:06:26.207907   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 17:06:26.207917   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 17:06:26.207929   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 17:06:26.207937   21063 main.go:141] libmachine: (addons-973562) Creating domain...
	I0815 17:06:26.207950   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:06:26.207961   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 17:06:26.207973   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 17:06:26.207982   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins
	I0815 17:06:26.207990   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home
	I0815 17:06:26.208001   21063 main.go:141] libmachine: (addons-973562) DBG | Skipping /home - not owner
	I0815 17:06:26.208963   21063 main.go:141] libmachine: (addons-973562) define libvirt domain using xml: 
	I0815 17:06:26.208986   21063 main.go:141] libmachine: (addons-973562) <domain type='kvm'>
	I0815 17:06:26.208994   21063 main.go:141] libmachine: (addons-973562)   <name>addons-973562</name>
	I0815 17:06:26.208999   21063 main.go:141] libmachine: (addons-973562)   <memory unit='MiB'>4000</memory>
	I0815 17:06:26.209004   21063 main.go:141] libmachine: (addons-973562)   <vcpu>2</vcpu>
	I0815 17:06:26.209009   21063 main.go:141] libmachine: (addons-973562)   <features>
	I0815 17:06:26.209014   21063 main.go:141] libmachine: (addons-973562)     <acpi/>
	I0815 17:06:26.209024   21063 main.go:141] libmachine: (addons-973562)     <apic/>
	I0815 17:06:26.209032   21063 main.go:141] libmachine: (addons-973562)     <pae/>
	I0815 17:06:26.209039   21063 main.go:141] libmachine: (addons-973562)     
	I0815 17:06:26.209048   21063 main.go:141] libmachine: (addons-973562)   </features>
	I0815 17:06:26.209055   21063 main.go:141] libmachine: (addons-973562)   <cpu mode='host-passthrough'>
	I0815 17:06:26.209062   21063 main.go:141] libmachine: (addons-973562)   
	I0815 17:06:26.209077   21063 main.go:141] libmachine: (addons-973562)   </cpu>
	I0815 17:06:26.209082   21063 main.go:141] libmachine: (addons-973562)   <os>
	I0815 17:06:26.209087   21063 main.go:141] libmachine: (addons-973562)     <type>hvm</type>
	I0815 17:06:26.209093   21063 main.go:141] libmachine: (addons-973562)     <boot dev='cdrom'/>
	I0815 17:06:26.209097   21063 main.go:141] libmachine: (addons-973562)     <boot dev='hd'/>
	I0815 17:06:26.209102   21063 main.go:141] libmachine: (addons-973562)     <bootmenu enable='no'/>
	I0815 17:06:26.209106   21063 main.go:141] libmachine: (addons-973562)   </os>
	I0815 17:06:26.209130   21063 main.go:141] libmachine: (addons-973562)   <devices>
	I0815 17:06:26.209147   21063 main.go:141] libmachine: (addons-973562)     <disk type='file' device='cdrom'>
	I0815 17:06:26.209159   21063 main.go:141] libmachine: (addons-973562)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/boot2docker.iso'/>
	I0815 17:06:26.209167   21063 main.go:141] libmachine: (addons-973562)       <target dev='hdc' bus='scsi'/>
	I0815 17:06:26.209176   21063 main.go:141] libmachine: (addons-973562)       <readonly/>
	I0815 17:06:26.209183   21063 main.go:141] libmachine: (addons-973562)     </disk>
	I0815 17:06:26.209189   21063 main.go:141] libmachine: (addons-973562)     <disk type='file' device='disk'>
	I0815 17:06:26.209197   21063 main.go:141] libmachine: (addons-973562)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 17:06:26.209207   21063 main.go:141] libmachine: (addons-973562)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/addons-973562.rawdisk'/>
	I0815 17:06:26.209213   21063 main.go:141] libmachine: (addons-973562)       <target dev='hda' bus='virtio'/>
	I0815 17:06:26.209269   21063 main.go:141] libmachine: (addons-973562)     </disk>
	I0815 17:06:26.209311   21063 main.go:141] libmachine: (addons-973562)     <interface type='network'>
	I0815 17:06:26.209327   21063 main.go:141] libmachine: (addons-973562)       <source network='mk-addons-973562'/>
	I0815 17:06:26.209339   21063 main.go:141] libmachine: (addons-973562)       <model type='virtio'/>
	I0815 17:06:26.209351   21063 main.go:141] libmachine: (addons-973562)     </interface>
	I0815 17:06:26.209368   21063 main.go:141] libmachine: (addons-973562)     <interface type='network'>
	I0815 17:06:26.209384   21063 main.go:141] libmachine: (addons-973562)       <source network='default'/>
	I0815 17:06:26.209394   21063 main.go:141] libmachine: (addons-973562)       <model type='virtio'/>
	I0815 17:06:26.209405   21063 main.go:141] libmachine: (addons-973562)     </interface>
	I0815 17:06:26.209415   21063 main.go:141] libmachine: (addons-973562)     <serial type='pty'>
	I0815 17:06:26.209427   21063 main.go:141] libmachine: (addons-973562)       <target port='0'/>
	I0815 17:06:26.209438   21063 main.go:141] libmachine: (addons-973562)     </serial>
	I0815 17:06:26.209451   21063 main.go:141] libmachine: (addons-973562)     <console type='pty'>
	I0815 17:06:26.209462   21063 main.go:141] libmachine: (addons-973562)       <target type='serial' port='0'/>
	I0815 17:06:26.209478   21063 main.go:141] libmachine: (addons-973562)     </console>
	I0815 17:06:26.209488   21063 main.go:141] libmachine: (addons-973562)     <rng model='virtio'>
	I0815 17:06:26.209498   21063 main.go:141] libmachine: (addons-973562)       <backend model='random'>/dev/random</backend>
	I0815 17:06:26.209509   21063 main.go:141] libmachine: (addons-973562)     </rng>
	I0815 17:06:26.209518   21063 main.go:141] libmachine: (addons-973562)     
	I0815 17:06:26.209528   21063 main.go:141] libmachine: (addons-973562)     
	I0815 17:06:26.209538   21063 main.go:141] libmachine: (addons-973562)   </devices>
	I0815 17:06:26.209548   21063 main.go:141] libmachine: (addons-973562) </domain>
	I0815 17:06:26.209557   21063 main.go:141] libmachine: (addons-973562) 
	I0815 17:06:26.216618   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:e9:78:fa in network default
	I0815 17:06:26.217167   21063 main.go:141] libmachine: (addons-973562) Ensuring networks are active...
	I0815 17:06:26.217213   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:26.217766   21063 main.go:141] libmachine: (addons-973562) Ensuring network default is active
	I0815 17:06:26.218072   21063 main.go:141] libmachine: (addons-973562) Ensuring network mk-addons-973562 is active
	I0815 17:06:26.219410   21063 main.go:141] libmachine: (addons-973562) Getting domain xml...
	I0815 17:06:26.220238   21063 main.go:141] libmachine: (addons-973562) Creating domain...
	I0815 17:06:27.621615   21063 main.go:141] libmachine: (addons-973562) Waiting to get IP...
	I0815 17:06:27.622275   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:27.622613   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:27.622675   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:27.622602   21085 retry.go:31] will retry after 276.809251ms: waiting for machine to come up
	I0815 17:06:27.901064   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:27.901555   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:27.901579   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:27.901508   21085 retry.go:31] will retry after 273.714625ms: waiting for machine to come up
	I0815 17:06:28.176976   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:28.177518   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:28.177547   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:28.177467   21085 retry.go:31] will retry after 425.434844ms: waiting for machine to come up
	I0815 17:06:28.603974   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:28.604406   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:28.604428   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:28.604345   21085 retry.go:31] will retry after 416.967692ms: waiting for machine to come up
	I0815 17:06:29.022650   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:29.023041   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:29.023061   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:29.023018   21085 retry.go:31] will retry after 604.334735ms: waiting for machine to come up
	I0815 17:06:29.630084   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:29.630530   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:29.630556   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:29.630479   21085 retry.go:31] will retry after 909.637578ms: waiting for machine to come up
	I0815 17:06:30.542174   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:30.542483   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:30.542505   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:30.542453   21085 retry.go:31] will retry after 1.052124898s: waiting for machine to come up
	I0815 17:06:31.595839   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:31.596218   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:31.596245   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:31.596183   21085 retry.go:31] will retry after 1.090139908s: waiting for machine to come up
	I0815 17:06:32.688285   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:32.688699   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:32.688728   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:32.688650   21085 retry.go:31] will retry after 1.368129262s: waiting for machine to come up
	I0815 17:06:34.059099   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:34.059591   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:34.059618   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:34.059543   21085 retry.go:31] will retry after 1.880437354s: waiting for machine to come up
	I0815 17:06:35.941488   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:35.941974   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:35.941999   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:35.941929   21085 retry.go:31] will retry after 2.253065386s: waiting for machine to come up
	I0815 17:06:38.197640   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:38.198068   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:38.198086   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:38.198040   21085 retry.go:31] will retry after 2.853822719s: waiting for machine to come up
	I0815 17:06:41.053413   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:41.053943   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:41.053974   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:41.053890   21085 retry.go:31] will retry after 2.751803169s: waiting for machine to come up
	I0815 17:06:43.808783   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:43.809125   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:43.809153   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:43.809109   21085 retry.go:31] will retry after 4.993758719s: waiting for machine to come up
	I0815 17:06:48.807086   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:48.807477   21063 main.go:141] libmachine: (addons-973562) Found IP for machine: 192.168.39.200
	I0815 17:06:48.807495   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has current primary IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:48.807501   21063 main.go:141] libmachine: (addons-973562) Reserving static IP address...
	I0815 17:06:48.807868   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find host DHCP lease matching {name: "addons-973562", mac: "52:54:00:71:0b:0e", ip: "192.168.39.200"} in network mk-addons-973562
	I0815 17:06:48.875728   21063 main.go:141] libmachine: (addons-973562) DBG | Getting to WaitForSSH function...
	I0815 17:06:48.875758   21063 main.go:141] libmachine: (addons-973562) Reserved static IP address: 192.168.39.200
	I0815 17:06:48.875771   21063 main.go:141] libmachine: (addons-973562) Waiting for SSH to be available...
	I0815 17:06:48.878185   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:48.878377   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562
	I0815 17:06:48.878403   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find defined IP address of network mk-addons-973562 interface with MAC address 52:54:00:71:0b:0e
	I0815 17:06:48.878582   21063 main.go:141] libmachine: (addons-973562) DBG | Using SSH client type: external
	I0815 17:06:48.878601   21063 main.go:141] libmachine: (addons-973562) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa (-rw-------)
	I0815 17:06:48.878685   21063 main.go:141] libmachine: (addons-973562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:06:48.878715   21063 main.go:141] libmachine: (addons-973562) DBG | About to run SSH command:
	I0815 17:06:48.878730   21063 main.go:141] libmachine: (addons-973562) DBG | exit 0
	I0815 17:06:48.889134   21063 main.go:141] libmachine: (addons-973562) DBG | SSH cmd err, output: exit status 255: 
	I0815 17:06:48.889163   21063 main.go:141] libmachine: (addons-973562) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0815 17:06:48.889171   21063 main.go:141] libmachine: (addons-973562) DBG | command : exit 0
	I0815 17:06:48.889179   21063 main.go:141] libmachine: (addons-973562) DBG | err     : exit status 255
	I0815 17:06:48.889223   21063 main.go:141] libmachine: (addons-973562) DBG | output  : 
	I0815 17:06:51.889905   21063 main.go:141] libmachine: (addons-973562) DBG | Getting to WaitForSSH function...
	I0815 17:06:51.892059   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:51.892507   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:51.892539   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:51.892705   21063 main.go:141] libmachine: (addons-973562) DBG | Using SSH client type: external
	I0815 17:06:51.892735   21063 main.go:141] libmachine: (addons-973562) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa (-rw-------)
	I0815 17:06:51.892765   21063 main.go:141] libmachine: (addons-973562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:06:51.892777   21063 main.go:141] libmachine: (addons-973562) DBG | About to run SSH command:
	I0815 17:06:51.892790   21063 main.go:141] libmachine: (addons-973562) DBG | exit 0
	I0815 17:06:52.016381   21063 main.go:141] libmachine: (addons-973562) DBG | SSH cmd err, output: <nil>: 
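Note: the WaitForSSH loop above simply retries "exit 0" through the external ssh client until the guest's sshd answers; the first attempt at 17:06:48 fails with exit status 255 because the DHCP lease (and thus the IP) was not up yet. A minimal Go sketch of the same idea, where the host, key path and retry policy are illustrative assumptions rather than values from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH retries "exit 0" via the external ssh binary until it succeeds,
	// using options similar to the ones logged above.
	func waitForSSH(host, keyPath string) error {
		args := []string{
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"docker@" + host,
			"exit 0",
		}
		for attempt := 0; attempt < 10; attempt++ {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				return nil // sshd in the guest is accepting connections
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
		}
		return fmt.Errorf("ssh to %s did not become ready", host)
	}

	func main() {
		// "/path/to/id_rsa" is a placeholder, not the key path from this run.
		if err := waitForSSH("192.168.39.200", "/path/to/id_rsa"); err != nil {
			fmt.Println(err)
		}
	}
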
	I0815 17:06:52.016665   21063 main.go:141] libmachine: (addons-973562) KVM machine creation complete!
	I0815 17:06:52.016952   21063 main.go:141] libmachine: (addons-973562) Calling .GetConfigRaw
	I0815 17:06:52.017450   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:52.017641   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:52.017792   21063 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 17:06:52.017807   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:06:52.018890   21063 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 17:06:52.018903   21063 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 17:06:52.018910   21063 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 17:06:52.018916   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.020950   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.021331   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.021362   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.021524   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.021690   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.021849   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.021983   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.022169   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.022404   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.022417   21063 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 17:06:52.127769   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:06:52.127789   21063 main.go:141] libmachine: Detecting the provisioner...
	I0815 17:06:52.127796   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.130399   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.130715   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.130745   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.130947   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.131241   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.131413   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.131533   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.131785   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.131943   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.131954   21063 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 17:06:52.241324   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 17:06:52.241395   21063 main.go:141] libmachine: found compatible host: buildroot
	I0815 17:06:52.241408   21063 main.go:141] libmachine: Provisioning with buildroot...
	I0815 17:06:52.241421   21063 main.go:141] libmachine: (addons-973562) Calling .GetMachineName
	I0815 17:06:52.241662   21063 buildroot.go:166] provisioning hostname "addons-973562"
	I0815 17:06:52.241688   21063 main.go:141] libmachine: (addons-973562) Calling .GetMachineName
	I0815 17:06:52.241857   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.244517   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.244863   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.244892   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.245007   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.245201   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.245347   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.245492   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.245659   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.245843   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.245856   21063 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-973562 && echo "addons-973562" | sudo tee /etc/hostname
	I0815 17:06:52.368381   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-973562
	
	I0815 17:06:52.368402   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.370731   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.371058   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.371097   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.371229   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.371392   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.371564   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.371697   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.371845   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.372011   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.372032   21063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-973562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-973562/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-973562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:06:52.490787   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:06:52.490817   21063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:06:52.490861   21063 buildroot.go:174] setting up certificates
	I0815 17:06:52.490874   21063 provision.go:84] configureAuth start
	I0815 17:06:52.490886   21063 main.go:141] libmachine: (addons-973562) Calling .GetMachineName
	I0815 17:06:52.491131   21063 main.go:141] libmachine: (addons-973562) Calling .GetIP
	I0815 17:06:52.493378   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.493682   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.493709   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.493870   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.495814   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.496141   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.496167   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.496260   21063 provision.go:143] copyHostCerts
	I0815 17:06:52.496333   21063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:06:52.496465   21063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:06:52.496561   21063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:06:52.496630   21063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.addons-973562 san=[127.0.0.1 192.168.39.200 addons-973562 localhost minikube]
	I0815 17:06:52.582245   21063 provision.go:177] copyRemoteCerts
	I0815 17:06:52.582303   21063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:06:52.582323   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.585055   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.585398   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.585426   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.585594   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.585769   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.585923   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.586079   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:06:52.672532   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:06:52.698488   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:06:52.723546   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 17:06:52.746433   21063 provision.go:87] duration metric: took 255.546254ms to configureAuth
	I0815 17:06:52.746474   21063 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:06:52.746699   21063 config.go:182] Loaded profile config "addons-973562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:06:52.746775   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.749226   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.749539   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.749571   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.749750   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.749917   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.750072   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.750235   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.750379   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.750598   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.750619   21063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:06:53.010465   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:06:53.010500   21063 main.go:141] libmachine: Checking connection to Docker...
	I0815 17:06:53.010511   21063 main.go:141] libmachine: (addons-973562) Calling .GetURL
	I0815 17:06:53.011924   21063 main.go:141] libmachine: (addons-973562) DBG | Using libvirt version 6000000
	I0815 17:06:53.013830   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.014152   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.014180   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.014291   21063 main.go:141] libmachine: Docker is up and running!
	I0815 17:06:53.014306   21063 main.go:141] libmachine: Reticulating splines...
	I0815 17:06:53.014314   21063 client.go:171] duration metric: took 27.632024015s to LocalClient.Create
	I0815 17:06:53.014341   21063 start.go:167] duration metric: took 27.632078412s to libmachine.API.Create "addons-973562"
	I0815 17:06:53.014357   21063 start.go:293] postStartSetup for "addons-973562" (driver="kvm2")
	I0815 17:06:53.014372   21063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:06:53.014392   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.014616   21063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:06:53.014638   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:53.016567   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.016877   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.016905   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.017056   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:53.017222   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.017373   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:53.017503   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:06:53.098968   21063 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:06:53.103157   21063 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:06:53.103183   21063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:06:53.103263   21063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:06:53.103293   21063 start.go:296] duration metric: took 88.925638ms for postStartSetup
	I0815 17:06:53.103329   21063 main.go:141] libmachine: (addons-973562) Calling .GetConfigRaw
	I0815 17:06:53.103874   21063 main.go:141] libmachine: (addons-973562) Calling .GetIP
	I0815 17:06:53.106235   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.106574   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.106607   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.106839   21063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/config.json ...
	I0815 17:06:53.107053   21063 start.go:128] duration metric: took 27.742142026s to createHost
	I0815 17:06:53.107086   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:53.109206   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.109503   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.109530   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.109639   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:53.109797   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.109950   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.110046   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:53.110192   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:53.110370   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:53.110381   21063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:06:53.217031   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723741613.197041360
	
	I0815 17:06:53.217057   21063 fix.go:216] guest clock: 1723741613.197041360
	I0815 17:06:53.217067   21063 fix.go:229] Guest: 2024-08-15 17:06:53.19704136 +0000 UTC Remote: 2024-08-15 17:06:53.10706892 +0000 UTC m=+27.845466349 (delta=89.97244ms)
	I0815 17:06:53.217091   21063 fix.go:200] guest clock delta is within tolerance: 89.97244ms
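The guest-clock check above runs `date +%s.%N` in the guest and compares it with the host's wall clock; the run is accepted because the ~90ms delta is within tolerance. A self-contained sketch of that comparison, where the guest timestamp is the one captured in this log and the 2-second tolerance is an assumption for illustration only (the actual threshold is applied by minikube's fix.go):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Output of `date +%s.%N` as captured from the guest in the log above.
		guestRaw := "1723741613.197041360"
		parts := strings.SplitN(guestRaw, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(sec, nsec)

		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		// 2s is an assumed tolerance for this example.
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
	}
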
	I0815 17:06:53.217099   21063 start.go:83] releasing machines lock for "addons-973562", held for 27.852271909s
	I0815 17:06:53.217123   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.217381   21063 main.go:141] libmachine: (addons-973562) Calling .GetIP
	I0815 17:06:53.219809   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.220126   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.220150   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.220293   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.220778   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.220940   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.221015   21063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:06:53.221061   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:53.221171   21063 ssh_runner.go:195] Run: cat /version.json
	I0815 17:06:53.221191   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:53.223835   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.223924   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.224160   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.224185   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.224217   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.224237   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.224303   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:53.224517   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:53.224540   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.224706   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:53.224739   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.224874   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:06:53.224939   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:53.225081   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:06:53.327198   21063 ssh_runner.go:195] Run: systemctl --version
	I0815 17:06:53.333352   21063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:06:53.493783   21063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:06:53.499868   21063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:06:53.499943   21063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:06:53.515938   21063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 17:06:53.515961   21063 start.go:495] detecting cgroup driver to use...
	I0815 17:06:53.516020   21063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:06:53.530930   21063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:06:53.544880   21063 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:06:53.544944   21063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:06:53.558070   21063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:06:53.571022   21063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:06:53.679728   21063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:06:53.828460   21063 docker.go:233] disabling docker service ...
	I0815 17:06:53.828542   21063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:06:53.843608   21063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:06:53.855704   21063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:06:53.999429   21063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:06:54.127017   21063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:06:54.140531   21063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:06:54.157960   21063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:06:54.158016   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.167667   21063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:06:54.167721   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.177591   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.187666   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.197324   21063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:06:54.207319   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.217036   21063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.233456   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.243417   21063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:06:54.252476   21063 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 17:06:54.252554   21063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 17:06:54.264858   21063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
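The two steps above cover kernel prerequisites for the bridge CNI: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded (which is why the initial sysctl probe fails harmlessly), and IPv4 forwarding must be enabled. A small sketch of the same sequence, using the paths shown in the log; the error handling is illustrative and the program must run as root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(bridgeSysctl); os.IsNotExist(err) {
			// The sysctl only appears after br_netfilter is loaded.
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
				return
			}
		}
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
			fmt.Printf("enabling ip_forward failed: %v\n", err)
		}
	}
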
	I0815 17:06:54.274225   21063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:06:54.396433   21063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:06:54.532868   21063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:06:54.532971   21063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:06:54.537631   21063 start.go:563] Will wait 60s for crictl version
	I0815 17:06:54.537703   21063 ssh_runner.go:195] Run: which crictl
	I0815 17:06:54.541277   21063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:06:54.580399   21063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:06:54.580528   21063 ssh_runner.go:195] Run: crio --version
	I0815 17:06:54.608318   21063 ssh_runner.go:195] Run: crio --version
	I0815 17:06:54.638666   21063 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:06:54.639920   21063 main.go:141] libmachine: (addons-973562) Calling .GetIP
	I0815 17:06:54.642151   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:54.642461   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:54.642481   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:54.642800   21063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:06:54.646908   21063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:06:54.658950   21063 kubeadm.go:883] updating cluster {Name:addons-973562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:06:54.659048   21063 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:06:54.659090   21063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:06:54.691040   21063 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 17:06:54.691101   21063 ssh_runner.go:195] Run: which lz4
	I0815 17:06:54.695285   21063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 17:06:54.699359   21063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 17:06:54.699381   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 17:06:55.937481   21063 crio.go:462] duration metric: took 1.242223137s to copy over tarball
	I0815 17:06:55.937548   21063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 17:06:58.041515   21063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.103934922s)
	I0815 17:06:58.041556   21063 crio.go:469] duration metric: took 2.104046807s to extract the tarball
	I0815 17:06:58.041567   21063 ssh_runner.go:146] rm: /preloaded.tar.lz4
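Because no preloaded images were found in the runtime, the ~389MB preload tarball is copied into the guest and unpacked under /var, pre-populating the container runtime's image store before kubeadm runs. A sketch of the extraction step only (the scp step is omitted; the tar flags mirror the command in the log, and running it requires lz4 and root in the guest):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Equivalent of:
		// sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extracting preload failed: %v: %s\n", err, out)
			return
		}
		fmt.Println("preloaded images extracted under /var")
	}
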
	I0815 17:06:58.078406   21063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:06:58.119965   21063 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:06:58.119986   21063 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:06:58.119995   21063 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.31.0 crio true true} ...
	I0815 17:06:58.120117   21063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-973562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:06:58.120205   21063 ssh_runner.go:195] Run: crio config
	I0815 17:06:58.166976   21063 cni.go:84] Creating CNI manager for ""
	I0815 17:06:58.166994   21063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 17:06:58.167003   21063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:06:58.167022   21063 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-973562 NodeName:addons-973562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:06:58.167168   21063 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-973562"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 17:06:58.167242   21063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:06:58.177137   21063 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:06:58.177198   21063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 17:06:58.186673   21063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 17:06:58.202335   21063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:06:58.217882   21063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0815 17:06:58.234161   21063 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0815 17:06:58.237763   21063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:06:58.249671   21063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:06:58.355061   21063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:06:58.370643   21063 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562 for IP: 192.168.39.200
	I0815 17:06:58.370667   21063 certs.go:194] generating shared ca certs ...
	I0815 17:06:58.370685   21063 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.370823   21063 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:06:58.566505   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt ...
	I0815 17:06:58.566532   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt: {Name:mk7b3c266988c3bf447b0d5846e34249420d4046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.566712   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key ...
	I0815 17:06:58.566725   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key: {Name:mk989a7f98c08ab9bacc7aac0e5b4671d9feab8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.566822   21063 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:06:58.663712   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt ...
	I0815 17:06:58.663739   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt: {Name:mk107ae151027de9139f76d73fd7a7d8b4333fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.663898   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key ...
	I0815 17:06:58.663910   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key: {Name:mkbad363b34cde2c9295a09e950bde4265a6d910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.664006   21063 certs.go:256] generating profile certs ...
	I0815 17:06:58.664058   21063 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.key
	I0815 17:06:58.664074   21063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt with IP's: []
	I0815 17:06:58.768925   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt ...
	I0815 17:06:58.768953   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: {Name:mk028cecbbd4c3c93083dc96c7b6732f9f2b764d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.769113   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.key ...
	I0815 17:06:58.769130   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.key: {Name:mk8eca70504a08a964c72dbf724341e25251229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.769223   21063 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key.a68f4c30
	I0815 17:06:58.769248   21063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt.a68f4c30 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200]
	I0815 17:06:59.250823   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt.a68f4c30 ...
	I0815 17:06:59.250851   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt.a68f4c30: {Name:mk452cee6d844e8db9f303b800b52ce910162df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:59.250997   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key.a68f4c30 ...
	I0815 17:06:59.251010   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key.a68f4c30: {Name:mk5364c42f311a455bf4483779b819cd363dcebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:59.251079   21063 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt.a68f4c30 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt
	I0815 17:06:59.251153   21063 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key.a68f4c30 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key
	I0815 17:06:59.251198   21063 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.key
	I0815 17:06:59.251216   21063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.crt with IP's: []
	I0815 17:06:59.466006   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.crt ...
	I0815 17:06:59.466035   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.crt: {Name:mkdb18e9115569cc98aa6c1385fdd768e627bc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:59.466183   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.key ...
	I0815 17:06:59.466193   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.key: {Name:mk9a467ec3acb971b0b82158cf4e08112b45e20f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
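The certs.go/crypto.go lines above create a local CA ("minikubeCA"), a client certificate, an apiserver serving certificate with IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.200, and an aggregator (proxy-client) pair. As a compact sketch of how such a CA-signed serving certificate can be produced with Go's standard library; the common names, validity period and key size are illustrative assumptions, only the SAN IPs are taken from the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Self-signed CA; "minikubeCA" matches the CA name used in the log.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Serving certificate signed by the CA, with the IP SANs reported above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.200"),
			},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}
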
	I0815 17:06:59.466350   21063 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:06:59.466383   21063 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:06:59.466406   21063 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:06:59.466429   21063 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:06:59.467006   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:06:59.496035   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:06:59.519242   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:06:59.541831   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:06:59.564643   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 17:06:59.586785   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:06:59.608298   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:06:59.630044   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:06:59.652196   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:06:59.674551   21063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:06:59.690517   21063 ssh_runner.go:195] Run: openssl version
	I0815 17:06:59.696100   21063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:06:59.706870   21063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:06:59.711145   21063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:06:59.711200   21063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:06:59.716928   21063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:06:59.727695   21063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:06:59.731500   21063 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:06:59.731544   21063 kubeadm.go:392] StartCluster: {Name:addons-973562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:06:59.731607   21063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:06:59.731642   21063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:06:59.771186   21063 cri.go:89] found id: ""
	I0815 17:06:59.771261   21063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:06:59.781486   21063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:06:59.791032   21063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:06:59.800272   21063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:06:59.800289   21063 kubeadm.go:157] found existing configuration files:
	
	I0815 17:06:59.800334   21063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 17:06:59.809341   21063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:06:59.809404   21063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:06:59.818541   21063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 17:06:59.827474   21063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:06:59.827527   21063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:06:59.836888   21063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 17:06:59.845822   21063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:06:59.845863   21063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:06:59.855219   21063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 17:06:59.864182   21063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:06:59.864228   21063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
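The four grep/rm pairs above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise so kubeadm can regenerate it. On this first start none of the files exist, so every grep exits with status 2 and the rm calls are no-ops. A condensed sketch of the same check-and-remove loop, using the file list and endpoint from the log:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # Keep the file only if it already targets the expected API endpoint.
        sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done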
	I0815 17:06:59.873365   21063 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 17:06:59.923497   21063 kubeadm.go:310] W0815 17:06:59.909291     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:06:59.924454   21063 kubeadm.go:310] W0815 17:06:59.910526     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:07:00.041221   21063 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 17:07:09.465409   21063 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 17:07:09.465482   21063 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:07:09.465578   21063 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:07:09.465678   21063 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:07:09.465812   21063 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:07:09.465902   21063 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:07:09.467564   21063 out.go:235]   - Generating certificates and keys ...
	I0815 17:07:09.467652   21063 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:07:09.467726   21063 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:07:09.467817   21063 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 17:07:09.467904   21063 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 17:07:09.467967   21063 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 17:07:09.468009   21063 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 17:07:09.468061   21063 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 17:07:09.468225   21063 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-973562 localhost] and IPs [192.168.39.200 127.0.0.1 ::1]
	I0815 17:07:09.468312   21063 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 17:07:09.468460   21063 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-973562 localhost] and IPs [192.168.39.200 127.0.0.1 ::1]
	I0815 17:07:09.468548   21063 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 17:07:09.468625   21063 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 17:07:09.468697   21063 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 17:07:09.468780   21063 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:07:09.468837   21063 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:07:09.468885   21063 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 17:07:09.468932   21063 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:07:09.468987   21063 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:07:09.469032   21063 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:07:09.469111   21063 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:07:09.469192   21063 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:07:09.470679   21063 out.go:235]   - Booting up control plane ...
	I0815 17:07:09.470756   21063 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:07:09.470869   21063 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:07:09.470956   21063 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:07:09.471093   21063 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:07:09.471216   21063 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:07:09.471282   21063 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:07:09.471427   21063 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 17:07:09.471519   21063 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 17:07:09.471603   21063 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.923908ms
	I0815 17:07:09.471710   21063 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 17:07:09.471790   21063 kubeadm.go:310] [api-check] The API server is healthy after 5.001999725s
	I0815 17:07:09.471911   21063 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:07:09.472090   21063 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:07:09.472169   21063 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:07:09.472400   21063 kubeadm.go:310] [mark-control-plane] Marking the node addons-973562 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:07:09.472494   21063 kubeadm.go:310] [bootstrap-token] Using token: u6ujye.vut6y5k8jcesrskl
	I0815 17:07:09.474028   21063 out.go:235]   - Configuring RBAC rules ...
	I0815 17:07:09.474138   21063 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:07:09.474245   21063 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:07:09.474415   21063 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:07:09.474565   21063 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:07:09.474728   21063 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:07:09.474830   21063 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:07:09.474989   21063 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:07:09.475053   21063 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:07:09.475119   21063 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:07:09.475128   21063 kubeadm.go:310] 
	I0815 17:07:09.475213   21063 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:07:09.475222   21063 kubeadm.go:310] 
	I0815 17:07:09.475333   21063 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:07:09.475347   21063 kubeadm.go:310] 
	I0815 17:07:09.475392   21063 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:07:09.475474   21063 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:07:09.475549   21063 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:07:09.475556   21063 kubeadm.go:310] 
	I0815 17:07:09.475623   21063 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:07:09.475633   21063 kubeadm.go:310] 
	I0815 17:07:09.475676   21063 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:07:09.475682   21063 kubeadm.go:310] 
	I0815 17:07:09.475740   21063 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:07:09.475824   21063 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:07:09.475883   21063 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:07:09.475889   21063 kubeadm.go:310] 
	I0815 17:07:09.475966   21063 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:07:09.476049   21063 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:07:09.476056   21063 kubeadm.go:310] 
	I0815 17:07:09.476120   21063 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u6ujye.vut6y5k8jcesrskl \
	I0815 17:07:09.476224   21063 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 17:07:09.476243   21063 kubeadm.go:310] 	--control-plane 
	I0815 17:07:09.476249   21063 kubeadm.go:310] 
	I0815 17:07:09.476311   21063 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:07:09.476316   21063 kubeadm.go:310] 
	I0815 17:07:09.476390   21063 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u6ujye.vut6y5k8jcesrskl \
	I0815 17:07:09.476497   21063 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
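The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), which joining nodes use to pin the CA during token-based discovery. It can be recomputed from the CA certificate; a sketch using the standard openssl pipeline, assuming the CA lives at ca.crt under the certificate directory reported in the [certs] line above (/var/lib/minikube/certs):

    # Recompute the discovery hash; kubeadm join expects the value prefixed with "sha256:".
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'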
	I0815 17:07:09.476514   21063 cni.go:84] Creating CNI manager for ""
	I0815 17:07:09.476524   21063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 17:07:09.477944   21063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 17:07:09.479065   21063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 17:07:09.490273   21063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
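Here the bridge CNI configuration is rendered from memory and written to /etc/cni/net.d/1-k8s.conflist (496 bytes); the log does not show its contents. For orientation only, an illustrative bridge + portmap chain of the general shape such a conflist takes — every value below is an assumption, not the file minikube actually wrote:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true,
          "ipMasq": true, "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }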
	I0815 17:07:09.509311   21063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:07:09.509370   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:09.509380   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-973562 minikube.k8s.io/updated_at=2024_08_15T17_07_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=addons-973562 minikube.k8s.io/primary=true
	I0815 17:07:09.671048   21063 ops.go:34] apiserver oom_adj: -16
	I0815 17:07:09.671188   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:10.172141   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:10.671946   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:11.172027   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:11.672171   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:12.171202   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:12.671775   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:13.171846   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:13.671576   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:13.787398   21063 kubeadm.go:1113] duration metric: took 4.278072461s to wait for elevateKubeSystemPrivileges
	I0815 17:07:13.787440   21063 kubeadm.go:394] duration metric: took 14.055899581s to StartCluster
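The burst of "kubectl get sa default" calls between 17:07:09 and 17:07:13 is minikube polling roughly every 500ms for the default ServiceAccount as part of elevateKubeSystemPrivileges (the minikube-rbac cluster-admin binding created above); the 4.278s duration metric covers that wait. A condensed sketch of the wait loop, with paths taken from the log:

    KUBECTL=/var/lib/minikube/binaries/v1.31.0/kubectl
    KCFG=/var/lib/minikube/kubeconfig
    # Retry until the default ServiceAccount exists; the log shows ~500ms between attempts.
    until sudo "$KUBECTL" get sa default --kubeconfig="$KCFG" >/dev/null 2>&1; do
        sleep 0.5
    done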
	I0815 17:07:13.787463   21063 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:07:13.787606   21063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:07:13.788173   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:07:13.788392   21063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 17:07:13.788430   21063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:07:13.788508   21063 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0815 17:07:13.788601   21063 addons.go:69] Setting yakd=true in profile "addons-973562"
	I0815 17:07:13.788622   21063 addons.go:69] Setting gcp-auth=true in profile "addons-973562"
	I0815 17:07:13.788637   21063 addons.go:234] Setting addon yakd=true in "addons-973562"
	I0815 17:07:13.788692   21063 mustload.go:65] Loading cluster: addons-973562
	I0815 17:07:13.788707   21063 addons.go:69] Setting default-storageclass=true in profile "addons-973562"
	I0815 17:07:13.788708   21063 config.go:182] Loaded profile config "addons-973562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:07:13.788693   21063 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-973562"
	I0815 17:07:13.788959   21063 addons.go:69] Setting helm-tiller=true in profile "addons-973562"
	I0815 17:07:13.789030   21063 addons.go:234] Setting addon helm-tiller=true in "addons-973562"
	I0815 17:07:13.788909   21063 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-973562"
	I0815 17:07:13.789063   21063 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-973562"
	I0815 17:07:13.789072   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789103   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789098   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789124   21063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-973562"
	I0815 17:07:13.789063   21063 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-973562"
	I0815 17:07:13.789220   21063 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-973562"
	I0815 17:07:13.789230   21063 addons.go:69] Setting ingress=true in profile "addons-973562"
	I0815 17:07:13.789234   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789237   21063 addons.go:69] Setting cloud-spanner=true in profile "addons-973562"
	I0815 17:07:13.789244   21063 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-973562"
	I0815 17:07:13.789253   21063 addons.go:69] Setting ingress-dns=true in profile "addons-973562"
	I0815 17:07:13.789271   21063 addons.go:234] Setting addon cloud-spanner=true in "addons-973562"
	I0815 17:07:13.789277   21063 addons.go:234] Setting addon ingress-dns=true in "addons-973562"
	I0815 17:07:13.789296   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789308   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789435   21063 addons.go:69] Setting registry=true in profile "addons-973562"
	I0815 17:07:13.789487   21063 addons.go:234] Setting addon registry=true in "addons-973562"
	I0815 17:07:13.789517   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789694   21063 addons.go:69] Setting storage-provisioner=true in profile "addons-973562"
	I0815 17:07:13.789721   21063 addons.go:234] Setting addon storage-provisioner=true in "addons-973562"
	I0815 17:07:13.789729   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.789742   21063 addons.go:69] Setting inspektor-gadget=true in profile "addons-973562"
	I0815 17:07:13.789751   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789756   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.789772   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.789778   21063 addons.go:234] Setting addon inspektor-gadget=true in "addons-973562"
	I0815 17:07:13.789794   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.789803   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.789806   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789874   21063 addons.go:69] Setting volumesnapshots=true in profile "addons-973562"
	I0815 17:07:13.789917   21063 addons.go:234] Setting addon volumesnapshots=true in "addons-973562"
	I0815 17:07:13.789951   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789997   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790044   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790164   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790191   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790260   21063 addons.go:69] Setting volcano=true in profile "addons-973562"
	I0815 17:07:13.790283   21063 addons.go:69] Setting metrics-server=true in profile "addons-973562"
	I0815 17:07:13.790326   21063 addons.go:234] Setting addon volcano=true in "addons-973562"
	I0815 17:07:13.790361   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.790366   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790413   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790531   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790568   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790264   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790646   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790749   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790799   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790876   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790909   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790913   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790969   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.789760   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.791154   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.791217   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.791421   21063 addons.go:234] Setting addon ingress=true in "addons-973562"
	I0815 17:07:13.791679   21063 config.go:182] Loaded profile config "addons-973562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:07:13.791847   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.802121   21063 out.go:177] * Verifying Kubernetes components...
	I0815 17:07:13.802591   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.802689   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.808432   21063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:07:13.790329   21063 addons.go:234] Setting addon metrics-server=true in "addons-973562"
	I0815 17:07:13.808855   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.809400   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.809459   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.812179   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33667
	I0815 17:07:13.790885   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.812303   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.812441   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I0815 17:07:13.812881   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.813170   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.813529   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.813565   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.813681   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.813701   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.814016   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.814038   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.814706   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.814719   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.814754   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.814760   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.814989   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0815 17:07:13.821506   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.821553   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.821885   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.828448   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0815 17:07:13.828624   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0815 17:07:13.829811   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.829894   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.836898   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.837093   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0815 17:07:13.837215   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0815 17:07:13.837388   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.837567   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.838142   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.838186   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.839157   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.839369   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.839389   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.839444   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.839474   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.839764   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.839849   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.839925   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.840117   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.840137   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.840207   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.840457   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0815 17:07:13.840640   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.840805   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.841457   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.841496   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.845332   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.845496   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.845518   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.845886   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.845924   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.846837   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.847494   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.847531   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.847766   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.848332   21063 addons.go:234] Setting addon default-storageclass=true in "addons-973562"
	I0815 17:07:13.848380   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.848442   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.848477   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.848950   21063 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-973562"
	I0815 17:07:13.848984   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.849545   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.854777   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0815 17:07:13.855279   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.859114   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0815 17:07:13.860518   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0815 17:07:13.861061   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.861199   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.861322   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.861272   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.861467   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.861571   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.861584   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.861922   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.862484   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.862519   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.862918   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.862940   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.863305   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.863842   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.863877   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.864460   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.864476   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.864898   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.865946   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.865988   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.874875   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42515
	I0815 17:07:13.875131   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0815 17:07:13.875624   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.875730   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.876307   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.876328   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.876435   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.876456   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.876722   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.876778   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.877292   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.877329   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.877549   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0815 17:07:13.879745   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
	I0815 17:07:13.879747   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.879828   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.880084   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.880602   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.880617   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.880683   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0815 17:07:13.881003   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.881116   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.881569   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.881586   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.881637   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.881644   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.881669   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.882153   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.882316   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.882539   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.882562   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.883586   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.884131   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.884164   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.884374   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.884429   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0815 17:07:13.884437   21063 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 17:07:13.884818   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.885255   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.885280   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.885581   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.885760   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.885920   21063 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 17:07:13.885939   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 17:07:13.885954   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.886038   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 17:07:13.886911   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0815 17:07:13.887294   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 17:07:13.887312   21063 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 17:07:13.887331   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.888018   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.888264   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:13.888276   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:13.890081   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:13.890090   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.890122   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:13.890134   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:13.890143   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:13.890156   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:13.890562   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:13.890580   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:13.890596   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:13.890604   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	W0815 17:07:13.890693   21063 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0815 17:07:13.890962   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.890990   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.891160   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.891339   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.891487   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.891540   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.891657   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.891950   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.891968   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.892100   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.892288   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.892442   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.892573   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.897696   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0815 17:07:13.898242   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.898799   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.898819   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.899184   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.899380   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.901052   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.902476   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.903035   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35117
	I0815 17:07:13.903220   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.903284   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.903299   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.903825   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.903885   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.903961   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.903980   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.904188   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.904505   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.904527   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.904975   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.905193   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.905263   21063 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 17:07:13.906331   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39037
	I0815 17:07:13.906477   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.906606   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.906754   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.906840   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.906915   21063 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 17:07:13.906926   21063 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 17:07:13.906943   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.907302   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.907316   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.908026   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.908169   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 17:07:13.908353   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.909150   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.909233   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.909276   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.909559   21063 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 17:07:13.910792   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 17:07:13.911467   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.910803   21063 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 17:07:13.911935   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.911972   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.912152   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.912335   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.912469   21063 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:07:13.912520   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.912666   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.913719   21063 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 17:07:13.913894   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0815 17:07:13.913925   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0815 17:07:13.914464   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 17:07:13.914553   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.914639   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.915157   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.915174   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.915319   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.915332   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.915709   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.915769   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.915822   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0815 17:07:13.915916   21063 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 17:07:13.915927   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 17:07:13.915939   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.915971   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.916154   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.916278   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.916340   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0815 17:07:13.916570   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.916582   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.916797   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.916889   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.917503   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.917532   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.918098   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.918114   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.918390   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0815 17:07:13.918477   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.918555   21063 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:07:13.918790   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.918895   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.919602   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 17:07:13.919739   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.919954   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.919970   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.920269   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.920284   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.920303   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.920467   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.920556   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.920653   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.920686   21063 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:07:13.920710   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 17:07:13.920730   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.920998   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.921035   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.921250   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.921388   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.921661   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.922245   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.922571   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 17:07:13.922588   21063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:07:13.922627   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.922644   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.923869   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.923937   21063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:07:13.923963   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:07:13.923979   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.923939   21063 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 17:07:13.924795   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.925088   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.925600   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 17:07:13.925606   21063 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 17:07:13.925624   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.925626   21063 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 17:07:13.925647   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.925785   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.925908   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.926019   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.927959   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 17:07:13.928387   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.928585   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.928617   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.928629   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.928848   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.929039   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.929155   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.929998   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.930353   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.930382   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.930452   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 17:07:13.930522   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.930662   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.930773   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.930897   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.931743   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 17:07:13.931767   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 17:07:13.931785   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.933944   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0815 17:07:13.934253   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.934678   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.934693   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.934840   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.934982   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.935136   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.935212   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.935231   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.935247   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0815 17:07:13.935492   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.935683   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.935902   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.936028   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.936345   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.936748   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.936894   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.936913   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.937486   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.937710   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.938647   21063 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0815 17:07:13.939121   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.940069   21063 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 17:07:13.940089   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 17:07:13.940106   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.940800   21063 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 17:07:13.942039   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 17:07:13.942056   21063 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 17:07:13.942074   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.943882   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.944413   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.944433   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.944678   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.944853   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.945002   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.945136   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.945394   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0815 17:07:13.945676   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.946360   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.946376   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.947077   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.947121   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.947301   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.947582   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.947604   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.947785   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.947967   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.948129   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.948282   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.949210   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.949804   21063 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:07:13.949822   21063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:07:13.949837   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.951180   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0815 17:07:13.951505   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0815 17:07:13.951659   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.952116   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.952136   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.952200   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.952665   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.952686   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.952748   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.952878   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.952935   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.952963   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.953237   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.953309   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.953329   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.953497   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.953728   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.953880   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.954017   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.954266   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0815 17:07:13.954649   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.954729   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.955101   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.955494   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.955516   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.955864   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.956039   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.956703   21063 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 17:07:13.956705   21063 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 17:07:13.957905   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I0815 17:07:13.958039   21063 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:07:13.958056   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 17:07:13.958073   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.958099   21063 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:07:13.958112   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 17:07:13.958128   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.958212   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.959207   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.959225   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.960570   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.960936   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.961674   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.961679   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.962212   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.962232   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.962235   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.962249   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.962310   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.962314   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.962478   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.962499   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.962711   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.962744   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.962902   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.962939   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.962903   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.964630   21063 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0815 17:07:13.965638   21063 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47196->192.168.39.200:22: read: connection reset by peer
	I0815 17:07:13.965664   21063 retry.go:31] will retry after 142.710304ms: ssh: handshake failed: read tcp 192.168.39.1:47196->192.168.39.200:22: read: connection reset by peer
	I0815 17:07:13.967200   21063 out.go:177]   - Using image docker.io/busybox:stable
	I0815 17:07:13.968604   21063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:07:13.968618   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 17:07:13.968631   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.973812   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.973812   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.973870   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.973883   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.974034   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.974194   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.974329   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	W0815 17:07:14.110646   21063 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47208->192.168.39.200:22: read: connection reset by peer
	I0815 17:07:14.110674   21063 retry.go:31] will retry after 445.724768ms: ssh: handshake failed: read tcp 192.168.39.1:47208->192.168.39.200:22: read: connection reset by peer
	I0815 17:07:14.262372   21063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:07:14.262543   21063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 17:07:14.375129   21063 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 17:07:14.375158   21063 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 17:07:14.384985   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 17:07:14.385011   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 17:07:14.412926   21063 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 17:07:14.412948   21063 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 17:07:14.428757   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:07:14.443500   21063 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 17:07:14.443517   21063 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 17:07:14.458904   21063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 17:07:14.458923   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 17:07:14.461748   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 17:07:14.486682   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:07:14.492416   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:07:14.516362   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:07:14.537671   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:07:14.592669   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 17:07:14.592698   21063 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 17:07:14.611198   21063 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 17:07:14.611219   21063 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 17:07:14.652036   21063 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 17:07:14.652057   21063 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 17:07:14.671402   21063 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:07:14.671420   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 17:07:14.705166   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 17:07:14.705187   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 17:07:14.764005   21063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 17:07:14.764022   21063 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 17:07:14.764909   21063 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 17:07:14.764930   21063 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 17:07:14.794923   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:07:14.797882   21063 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 17:07:14.797909   21063 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 17:07:14.850865   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 17:07:14.850893   21063 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 17:07:14.864941   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 17:07:14.864966   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 17:07:14.950936   21063 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 17:07:14.950957   21063 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 17:07:15.071809   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 17:07:15.071833   21063 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 17:07:15.080435   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:07:15.081262   21063 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 17:07:15.081280   21063 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 17:07:15.090764   21063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:07:15.090786   21063 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 17:07:15.121111   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 17:07:15.121139   21063 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 17:07:15.145060   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 17:07:15.145089   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 17:07:15.214561   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 17:07:15.255160   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:07:15.288108   21063 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 17:07:15.288132   21063 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 17:07:15.299760   21063 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:07:15.299779   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 17:07:15.305822   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 17:07:15.305843   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 17:07:15.407280   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:07:15.407338   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 17:07:15.427052   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:07:15.508592   21063 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 17:07:15.508622   21063 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 17:07:15.567781   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 17:07:15.567801   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 17:07:15.602699   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:07:15.762719   21063 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 17:07:15.762750   21063 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 17:07:15.784906   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 17:07:15.784931   21063 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 17:07:15.968366   21063 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:07:15.968392   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 17:07:16.032931   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 17:07:16.032957   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 17:07:16.101899   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:07:16.169209   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 17:07:16.169239   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 17:07:16.382155   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:07:16.382182   21063 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 17:07:16.589610   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:07:16.633092   21063 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.370682401s)
	I0815 17:07:16.633128   21063 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.37054985s)
	I0815 17:07:16.633151   21063 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0815 17:07:16.633999   21063 node_ready.go:35] waiting up to 6m0s for node "addons-973562" to be "Ready" ...
	I0815 17:07:16.641592   21063 node_ready.go:49] node "addons-973562" has status "Ready":"True"
	I0815 17:07:16.641614   21063 node_ready.go:38] duration metric: took 7.591501ms for node "addons-973562" to be "Ready" ...
	I0815 17:07:16.641624   21063 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:07:16.707336   21063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:17.181989   21063 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-973562" context rescaled to 1 replicas
	I0815 17:07:18.755881   21063 pod_ready.go:103] pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:20.968333   21063 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 17:07:20.968368   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:20.971622   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:20.972045   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:20.972072   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:20.972245   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:20.972451   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:20.972629   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:20.972761   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:21.298553   21063 pod_ready.go:103] pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:21.395461   21063 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 17:07:21.467782   21063 addons.go:234] Setting addon gcp-auth=true in "addons-973562"
	I0815 17:07:21.467835   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:21.468182   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:21.468209   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:21.483908   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0815 17:07:21.484353   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:21.484846   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:21.484871   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:21.485191   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:21.485656   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:21.485696   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:21.500871   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0815 17:07:21.501265   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:21.501706   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:21.501723   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:21.502023   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:21.502227   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:21.503726   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:21.503965   21063 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 17:07:21.503992   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:21.506618   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:21.506947   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:21.506982   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:21.507113   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:21.507273   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:21.507434   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:21.507680   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:22.485649   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.023875475s)
	I0815 17:07:22.485706   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.485717   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.485730   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.999018414s)
	I0815 17:07:22.485767   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.056982201s)
	I0815 17:07:22.485797   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.485808   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.485838   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.993396709s)
	I0815 17:07:22.485772   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.485860   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.485887   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.969504686s)
	I0815 17:07:22.485906   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.485915   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486004   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.94830673s)
	I0815 17:07:22.486065   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486073   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486076   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486082   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486154   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.691207161s)
	I0815 17:07:22.486188   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486199   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486276   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.405820618s)
	I0815 17:07:22.486290   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486298   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486334   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486357   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486368   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486375   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486422   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.231226215s)
	I0815 17:07:22.486433   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486438   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486442   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486447   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486451   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486458   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486506   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486514   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486522   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486529   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486548   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.486556   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486565   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486572   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486579   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486598   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486606   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486614   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486621   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486651   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.059572098s)
	I0815 17:07:22.486667   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	W0815 17:07:22.486679   21063 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 17:07:22.486689   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486700   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486710   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486709   21063 retry.go:31] will retry after 177.288932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 17:07:22.486716   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486358   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.271773766s)
	I0815 17:07:22.486734   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486743   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486755   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.486776   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486782   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486789   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486795   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486851   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.486874   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486881   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.487553   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.487580   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.487587   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.487595   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.487602   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.487651   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.487668   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.487677   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.487684   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.487690   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.487725   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.487741   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.487748   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.487756   21063 addons.go:475] Verifying addon registry=true in "addons-973562"
	I0815 17:07:22.487941   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.487964   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.487971   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.488126   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.488145   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.488165   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.488180   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.488248   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.488276   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.488285   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.488477   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.488519   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.488527   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490099   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.490124   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.490131   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490139   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.490145   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.490198   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.490215   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.490221   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490229   21063 addons.go:475] Verifying addon metrics-server=true in "addons-973562"
	I0815 17:07:22.490704   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.490732   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.490740   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490757   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.490768   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490776   21063 addons.go:475] Verifying addon ingress=true in "addons-973562"
	I0815 17:07:22.490827   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.888079663s)
	I0815 17:07:22.491068   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.491081   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.490870   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.490891   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.491137   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490901   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.388968281s)
	I0815 17:07:22.491180   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.491188   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.491358   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.491368   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.491376   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.491383   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.491483   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.492668   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.492681   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.492880   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.492976   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.492992   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.493000   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.493182   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.493195   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.493323   21063 out.go:177] * Verifying ingress addon...
	I0815 17:07:22.493361   21063 out.go:177] * Verifying registry addon...
	I0815 17:07:22.494179   21063 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-973562 service yakd-dashboard -n yakd-dashboard
	
	I0815 17:07:22.495634   21063 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 17:07:22.495717   21063 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 17:07:22.506574   21063 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 17:07:22.506603   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:22.508811   21063 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 17:07:22.508828   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:22.518529   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.518546   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.518768   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.518807   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.518824   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	W0815 17:07:22.518912   21063 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0815 17:07:22.523257   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.523271   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.523522   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.523542   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.664518   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:07:23.011574   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:23.011757   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:23.156665   21063 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.652676046s)
	I0815 17:07:23.156692   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.56698888s)
	I0815 17:07:23.156750   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:23.156767   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:23.157221   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:23.157236   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:23.157249   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:23.157266   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:23.157275   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:23.157583   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:23.157606   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:23.157622   21063 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-973562"
	I0815 17:07:23.158180   21063 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:07:23.159036   21063 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 17:07:23.160431   21063 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 17:07:23.161217   21063 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 17:07:23.161587   21063 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 17:07:23.161607   21063 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 17:07:23.199800   21063 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 17:07:23.199821   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:23.265781   21063 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 17:07:23.265802   21063 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 17:07:23.366073   21063 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:07:23.366092   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 17:07:23.436364   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:07:23.501663   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:23.502623   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:23.835088   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:23.850569   21063 pod_ready.go:103] pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:24.000587   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:24.001655   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:24.166701   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:24.500942   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:24.503795   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:24.670156   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:24.773014   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.108457579s)
	I0815 17:07:24.773056   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:24.773070   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:24.773324   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:24.773338   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:24.773380   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:24.773396   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:24.773407   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:24.773728   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:24.773751   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:24.773754   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:25.008814   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:25.009134   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:25.186592   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:25.306074   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.869671841s)
	I0815 17:07:25.306122   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:25.306138   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:25.306396   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:25.306445   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:25.306454   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:25.306468   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:25.306476   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:25.306713   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:25.306727   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:25.306810   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:25.308442   21063 addons.go:475] Verifying addon gcp-auth=true in "addons-973562"
	I0815 17:07:25.309945   21063 out.go:177] * Verifying gcp-auth addon...
	I0815 17:07:25.311924   21063 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 17:07:25.316520   21063 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 17:07:25.316537   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:25.530736   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:25.534704   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:25.665365   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:25.816849   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:26.000718   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:26.001062   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:26.166146   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:26.214138   21063 pod_ready.go:98] pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:25 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.200 HostIPs:[{IP:192.168.39.200}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-15 17:07:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 17:07:18 +0000 UTC,FinishedAt:2024-08-15 17:07:24 +0000 UTC,ContainerID:cri-o://5b02889d8198258a7ed67e8550b23ee65d86cd7d63e350ac76b79256c1b4d57a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://5b02889d8198258a7ed67e8550b23ee65d86cd7d63e350ac76b79256c1b4d57a Started:0xc001f4ca60 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c8dc40} {Name:kube-api-access-mb8pm MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c8dc50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 17:07:26.214173   21063 pod_ready.go:82] duration metric: took 9.506796195s for pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace to be "Ready" ...
	E0815 17:07:26.214186   21063 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:25 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.200 HostIPs:[{IP:192.168.39.200}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-15 17:07:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 17:07:18 +0000 UTC,FinishedAt:2024-08-15 17:07:24 +0000 UTC,ContainerID:cri-o://5b02889d8198258a7ed67e8550b23ee65d86cd7d63e350ac76b79256c1b4d57a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://5b02889d8198258a7ed67e8550b23ee65d86cd7d63e350ac76b79256c1b4d57a Started:0xc001f4ca60 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c8dc40} {Name:kube-api-access-mb8pm MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c8dc50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 17:07:26.214197   21063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpjgp" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.224574   21063 pod_ready.go:93] pod "coredns-6f6b679f8f-mpjgp" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.224592   21063 pod_ready.go:82] duration metric: took 10.386648ms for pod "coredns-6f6b679f8f-mpjgp" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.224600   21063 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.233555   21063 pod_ready.go:93] pod "etcd-addons-973562" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.233571   21063 pod_ready.go:82] duration metric: took 8.966544ms for pod "etcd-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.233581   21063 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.244554   21063 pod_ready.go:93] pod "kube-apiserver-addons-973562" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.244573   21063 pod_ready.go:82] duration metric: took 10.985949ms for pod "kube-apiserver-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.244581   21063 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.249467   21063 pod_ready.go:93] pod "kube-controller-manager-addons-973562" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.249503   21063 pod_ready.go:82] duration metric: took 4.91574ms for pod "kube-controller-manager-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.249510   21063 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9zjlq" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.315436   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:26.500411   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:26.501068   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:26.611330   21063 pod_ready.go:93] pod "kube-proxy-9zjlq" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.611358   21063 pod_ready.go:82] duration metric: took 361.840339ms for pod "kube-proxy-9zjlq" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.611372   21063 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.666206   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:26.815065   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:27.000977   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:27.002625   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:27.012031   21063 pod_ready.go:93] pod "kube-scheduler-addons-973562" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:27.012052   21063 pod_ready.go:82] duration metric: took 400.671098ms for pod "kube-scheduler-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:27.012065   21063 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:27.167202   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:27.316701   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:27.500196   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:27.500953   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:27.666463   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:27.814934   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:27.999909   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:28.000941   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:28.165798   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:28.314834   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:28.499555   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:28.499964   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:28.666757   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:28.816318   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:29.000307   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:29.000349   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:29.024541   21063 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:29.420067   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:29.422397   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:29.501199   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:29.501683   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:29.665506   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:29.815560   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:30.001003   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:30.001162   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:30.166171   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:30.315552   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:30.504040   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:30.504409   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:30.666672   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:30.815379   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:30.999702   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:31.000071   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:31.165439   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:31.315682   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:31.512681   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:31.513116   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:31.523120   21063 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:31.666324   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:31.815858   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:32.000054   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:32.000454   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:32.167309   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:32.314688   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:32.500418   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:32.502128   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:32.667134   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:32.815203   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:33.000371   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:33.001099   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:33.166087   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:33.315027   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:33.505916   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:33.506371   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:33.525350   21063 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:33.525374   21063 pod_ready.go:82] duration metric: took 6.513301495s for pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:33.525384   21063 pod_ready.go:39] duration metric: took 16.883746774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:07:33.525406   21063 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:07:33.525469   21063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:07:33.542585   21063 api_server.go:72] duration metric: took 19.754118352s to wait for apiserver process to appear ...
	I0815 17:07:33.542605   21063 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:07:33.542622   21063 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0815 17:07:33.547505   21063 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0815 17:07:33.548309   21063 api_server.go:141] control plane version: v1.31.0
	I0815 17:07:33.548328   21063 api_server.go:131] duration metric: took 5.716889ms to wait for apiserver health ...
	I0815 17:07:33.548336   21063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:07:33.557378   21063 system_pods.go:59] 18 kube-system pods found
	I0815 17:07:33.557400   21063 system_pods.go:61] "coredns-6f6b679f8f-mpjgp" [a9818a08-6d11-41fe-81d9-afed636031df] Running
	I0815 17:07:33.557409   21063 system_pods.go:61] "csi-hostpath-attacher-0" [596b55e2-5cc7-4818-9e03-9e5bc52c081a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 17:07:33.557417   21063 system_pods.go:61] "csi-hostpath-resizer-0" [090d6c78-cb3b-44b5-b749-f53c4ec2fd5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 17:07:33.557425   21063 system_pods.go:61] "csi-hostpathplugin-csfg8" [0b7bd1d3-48f6-4f63-b5d1-bb152345a4f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 17:07:33.557429   21063 system_pods.go:61] "etcd-addons-973562" [27923b84-f63e-402c-b3b6-f21c39b7d672] Running
	I0815 17:07:33.557433   21063 system_pods.go:61] "kube-apiserver-addons-973562" [72f2bb55-2489-43d7-8831-425ddcab1c67] Running
	I0815 17:07:33.557440   21063 system_pods.go:61] "kube-controller-manager-addons-973562" [0f3e0bf9-94c4-4d47-8e4b-c3aacd43a567] Running
	I0815 17:07:33.557445   21063 system_pods.go:61] "kube-ingress-dns-minikube" [af9ffb5d-8172-478e-bf4f-ce5fafaba75b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0815 17:07:33.557452   21063 system_pods.go:61] "kube-proxy-9zjlq" [0ade0f95-ff6d-402e-8491-a63a6c75767c] Running
	I0815 17:07:33.557457   21063 system_pods.go:61] "kube-scheduler-addons-973562" [2aa94285-4622-46ad-a181-ed22ad8cbe17] Running
	I0815 17:07:33.557462   21063 system_pods.go:61] "metrics-server-8988944d9-2rpw7" [5ccb0984-23af-4380-b4e7-c266d3917b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 17:07:33.557468   21063 system_pods.go:61] "nvidia-device-plugin-daemonset-9rkx2" [4d297fcf-2d70-4adb-b547-f8b1dbe59d7b] Running
	I0815 17:07:33.557474   21063 system_pods.go:61] "registry-6fb4cdfc84-svjjj" [c96c1884-ddbb-4955-b9b8-6c11e6a0e893] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 17:07:33.557481   21063 system_pods.go:61] "registry-proxy-mjdz8" [e4645394-eb8e-49e3-bab8-fb41e2aaebdf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 17:07:33.557490   21063 system_pods.go:61] "snapshot-controller-56fcc65765-9nhk7" [99bc41a8-780f-4b5e-aaec-4b90a782e8e6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:07:33.557497   21063 system_pods.go:61] "snapshot-controller-56fcc65765-wcf7d" [7152eb2d-aaf6-41a7-af66-dc316576c773] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:07:33.557503   21063 system_pods.go:61] "storage-provisioner" [c3a49d08-7c2e-4333-bde2-165983d8812b] Running
	I0815 17:07:33.557509   21063 system_pods.go:61] "tiller-deploy-b48cc5f79-4z6lg" [e1606621-5c24-447f-bc36-4b807d48e67a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0815 17:07:33.557516   21063 system_pods.go:74] duration metric: took 9.175841ms to wait for pod list to return data ...
	I0815 17:07:33.557522   21063 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:07:33.559457   21063 default_sa.go:45] found service account: "default"
	I0815 17:07:33.559470   21063 default_sa.go:55] duration metric: took 1.940916ms for default service account to be created ...
	I0815 17:07:33.559476   21063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:07:33.566682   21063 system_pods.go:86] 18 kube-system pods found
	I0815 17:07:33.566709   21063 system_pods.go:89] "coredns-6f6b679f8f-mpjgp" [a9818a08-6d11-41fe-81d9-afed636031df] Running
	I0815 17:07:33.566722   21063 system_pods.go:89] "csi-hostpath-attacher-0" [596b55e2-5cc7-4818-9e03-9e5bc52c081a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 17:07:33.566730   21063 system_pods.go:89] "csi-hostpath-resizer-0" [090d6c78-cb3b-44b5-b749-f53c4ec2fd5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 17:07:33.566742   21063 system_pods.go:89] "csi-hostpathplugin-csfg8" [0b7bd1d3-48f6-4f63-b5d1-bb152345a4f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 17:07:33.566752   21063 system_pods.go:89] "etcd-addons-973562" [27923b84-f63e-402c-b3b6-f21c39b7d672] Running
	I0815 17:07:33.566759   21063 system_pods.go:89] "kube-apiserver-addons-973562" [72f2bb55-2489-43d7-8831-425ddcab1c67] Running
	I0815 17:07:33.566766   21063 system_pods.go:89] "kube-controller-manager-addons-973562" [0f3e0bf9-94c4-4d47-8e4b-c3aacd43a567] Running
	I0815 17:07:33.566777   21063 system_pods.go:89] "kube-ingress-dns-minikube" [af9ffb5d-8172-478e-bf4f-ce5fafaba75b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0815 17:07:33.566781   21063 system_pods.go:89] "kube-proxy-9zjlq" [0ade0f95-ff6d-402e-8491-a63a6c75767c] Running
	I0815 17:07:33.566785   21063 system_pods.go:89] "kube-scheduler-addons-973562" [2aa94285-4622-46ad-a181-ed22ad8cbe17] Running
	I0815 17:07:33.566792   21063 system_pods.go:89] "metrics-server-8988944d9-2rpw7" [5ccb0984-23af-4380-b4e7-c266d3917b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 17:07:33.566801   21063 system_pods.go:89] "nvidia-device-plugin-daemonset-9rkx2" [4d297fcf-2d70-4adb-b547-f8b1dbe59d7b] Running
	I0815 17:07:33.566810   21063 system_pods.go:89] "registry-6fb4cdfc84-svjjj" [c96c1884-ddbb-4955-b9b8-6c11e6a0e893] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 17:07:33.566821   21063 system_pods.go:89] "registry-proxy-mjdz8" [e4645394-eb8e-49e3-bab8-fb41e2aaebdf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 17:07:33.566832   21063 system_pods.go:89] "snapshot-controller-56fcc65765-9nhk7" [99bc41a8-780f-4b5e-aaec-4b90a782e8e6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:07:33.566845   21063 system_pods.go:89] "snapshot-controller-56fcc65765-wcf7d" [7152eb2d-aaf6-41a7-af66-dc316576c773] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:07:33.566851   21063 system_pods.go:89] "storage-provisioner" [c3a49d08-7c2e-4333-bde2-165983d8812b] Running
	I0815 17:07:33.566862   21063 system_pods.go:89] "tiller-deploy-b48cc5f79-4z6lg" [e1606621-5c24-447f-bc36-4b807d48e67a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0815 17:07:33.566870   21063 system_pods.go:126] duration metric: took 7.387465ms to wait for k8s-apps to be running ...
	I0815 17:07:33.566883   21063 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:07:33.566932   21063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:07:33.581827   21063 system_svc.go:56] duration metric: took 14.935668ms WaitForService to wait for kubelet
	I0815 17:07:33.581856   21063 kubeadm.go:582] duration metric: took 19.793392359s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:07:33.581874   21063 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:07:33.584624   21063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:07:33.584654   21063 node_conditions.go:123] node cpu capacity is 2
	I0815 17:07:33.584665   21063 node_conditions.go:105] duration metric: took 2.787137ms to run NodePressure ...
	I0815 17:07:33.584675   21063 start.go:241] waiting for startup goroutines ...
	I0815 17:07:33.665452   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:33.815843   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:33.999380   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:33.999608   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:34.165693   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:34.315006   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:34.499373   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:34.499618   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:34.667363   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:34.815957   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:34.999767   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:35.002138   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:35.166370   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:35.315048   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:35.500576   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:35.500720   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:35.666133   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:35.815309   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:36.000617   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:36.001360   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:36.165638   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:36.314651   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:36.500717   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:36.500831   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:36.665099   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:36.815438   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:37.002830   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:37.003134   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:37.166677   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:37.315904   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:37.499865   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:37.500173   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:37.665663   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:37.816227   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:38.000273   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:38.000974   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:38.165676   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:38.631984   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:38.632195   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:38.632926   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:38.665863   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:38.815651   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:39.000312   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:39.000704   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:39.167468   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:39.315762   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:39.501563   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:39.501698   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:39.666238   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:39.815417   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:40.000155   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:40.000774   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:40.165380   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:40.318534   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:40.501153   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:40.501494   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:40.665944   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:40.815252   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:41.000122   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:41.000344   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:41.169716   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:41.315261   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:41.500813   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:41.500890   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:41.665039   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:41.815397   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:42.000099   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:42.000227   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:42.166919   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:42.315320   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:42.500608   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:42.500874   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:42.667082   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:42.815675   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:43.001636   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:43.003021   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:43.170559   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:43.315865   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:43.501748   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:43.502177   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:43.666082   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:43.815557   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:44.000808   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:44.001071   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:44.166281   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:44.315687   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:44.499900   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:44.500574   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:44.666048   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:44.815108   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:45.001212   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:45.001474   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:45.167095   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:45.315713   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:45.500284   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:45.500823   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:45.666449   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:45.815773   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:45.999951   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:45.999962   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:46.165766   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:46.314776   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:46.499526   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:46.499710   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:46.666316   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:46.815689   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:47.001229   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:47.001904   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:47.166692   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:47.315772   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:47.500704   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:47.501875   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:47.666002   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:47.825700   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:48.002017   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:48.003103   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:48.165376   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:48.315693   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:48.502452   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:48.502895   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:48.665552   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:48.815422   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:49.000249   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:49.000972   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:49.166350   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:49.315482   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:49.500458   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:49.503695   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:49.666748   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:49.815489   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:49.999475   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:50.001081   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:50.165633   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:50.316181   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:50.500219   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:50.501475   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:50.666210   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:50.815638   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:51.000816   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:51.000884   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:51.165283   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:51.315660   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:51.500694   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:51.500992   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:51.665321   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:51.815655   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:52.000587   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:52.000679   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:52.166333   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:52.317715   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:52.500465   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:52.500923   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:52.666476   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:52.815822   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:53.001717   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:53.001961   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:53.305973   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:53.323308   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:53.499786   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:53.500331   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:53.666240   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:53.815731   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:54.000305   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:54.000593   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:54.166084   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:54.460270   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:54.500474   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:54.502046   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:54.666222   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:54.815203   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:54.999990   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:55.000714   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:55.168316   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:55.316126   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:55.502225   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:55.502778   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:55.668669   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:55.816703   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:56.000303   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:56.000640   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:56.166211   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:56.315490   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:56.500085   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:56.500622   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:56.666165   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:56.814860   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:57.000375   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:57.000387   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:57.166336   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:57.317231   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:57.500839   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:57.501286   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:57.665782   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:57.815148   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:57.999632   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:58.000214   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:58.166866   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:58.315876   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:58.500311   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:58.500476   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:58.665884   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:58.837603   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:59.001500   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:59.002320   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:59.166189   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:59.315374   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:59.500624   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:59.501502   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:59.666136   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:59.815813   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:00.002221   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:00.002372   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:00.165465   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:00.316161   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:00.499981   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:00.501004   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:00.667219   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:00.821503   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:01.000978   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:01.002983   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:01.166489   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:01.315348   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:01.505509   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:01.505845   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:01.665647   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:01.815695   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:02.001279   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:02.001687   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:02.165956   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:02.315213   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:02.500578   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:02.501438   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:02.666335   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:02.815461   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:03.001190   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:03.001530   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:03.166453   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:03.315612   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:03.500520   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:03.501307   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:03.665449   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:03.816018   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:04.000637   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:04.001236   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:04.165049   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:04.315527   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:04.501100   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:04.501280   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:04.666547   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:04.816463   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:05.000801   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:05.001745   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:05.166036   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:05.315018   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:05.500521   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:05.500638   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:05.664979   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:05.815054   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:06.000008   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:06.000294   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:06.167075   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:06.315587   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:06.500968   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:06.501083   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:06.665821   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:06.815159   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:07.000263   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:07.001020   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:07.166623   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:07.316128   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:07.501207   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:07.501282   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:07.666602   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:07.815974   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:08.000731   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:08.000959   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:08.165763   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:08.315379   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:08.501434   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:08.501914   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:08.666060   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:08.815594   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:09.000679   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:09.001702   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:09.165737   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:09.315513   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:09.500651   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:09.501377   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:09.666296   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:09.815411   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:10.138171   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:10.143269   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:10.241957   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:10.316285   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:10.500877   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:10.501180   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:10.665944   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:10.815569   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:11.000283   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:11.001524   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:11.167027   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:11.315495   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:11.500590   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:11.501360   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:11.786829   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:11.815069   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:12.000899   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:12.001057   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:12.165481   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:12.315917   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:12.500447   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:12.501019   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:12.666277   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:12.815745   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:12.999950   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:13.000790   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:13.166332   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:13.315391   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:13.500652   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:13.501245   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:13.665079   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:13.815516   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:14.000917   21063 kapi.go:107] duration metric: took 51.505280014s to wait for kubernetes.io/minikube-addons=registry ...
	I0815 17:08:14.000992   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:14.166001   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:14.315031   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:14.499755   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:14.666132   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:14.816203   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:15.000860   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:15.165300   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:15.315326   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:15.500227   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:15.665556   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:15.816643   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:16.001281   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:16.165884   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:16.315535   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:16.500169   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:16.665610   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:16.815005   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:16.999450   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:17.166581   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:17.315475   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:17.500439   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:17.666976   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:18.033852   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:18.034249   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:18.166670   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:18.315487   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:18.500375   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:18.666536   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:18.815489   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:19.000261   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:19.165679   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:19.315021   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:19.500842   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:19.666657   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:19.815392   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:20.004464   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:20.166360   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:20.315952   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:20.501853   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:20.665533   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:20.815930   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:20.999713   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:21.166601   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:21.316422   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:21.500246   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:21.665856   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:21.815461   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:21.999814   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:22.166138   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:22.315563   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:22.500951   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:22.665173   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:22.815823   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:23.000474   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:23.166340   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:23.315859   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:23.499441   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:23.666692   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:23.815305   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:24.000647   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:24.167984   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:24.315134   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:24.499663   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:24.666315   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:24.815829   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:24.999818   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:25.165800   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:25.315354   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:25.499478   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:25.666064   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:25.815594   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:26.000725   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:26.167004   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:26.314993   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:26.500581   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:26.666415   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:26.815280   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:27.000225   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:27.165278   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:27.315578   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:27.499950   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:27.665617   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:27.816130   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:28.000706   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:28.166783   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:28.315625   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:28.500783   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:28.665525   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:28.815854   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:28.999512   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:29.166546   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:29.316139   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:29.500409   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:29.665745   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:29.814995   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:29.999973   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:30.165953   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:30.316291   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:30.500657   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:30.667129   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:30.816183   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:31.000064   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:31.165859   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:31.315313   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:31.500025   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:31.665727   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:31.814901   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:31.999550   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:32.166340   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:32.315710   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:32.500597   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:32.666238   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:32.815468   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:33.000186   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:33.165657   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:33.316673   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:33.500095   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:33.665893   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:33.815354   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:34.000146   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:34.166522   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:34.315930   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:34.499515   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:34.666191   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:34.816377   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:35.000133   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:35.165869   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:35.315606   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:35.500064   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:35.665751   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:35.815636   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:36.000525   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:36.166242   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:36.315580   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:36.500108   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:36.665258   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:36.815864   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:37.000607   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:37.165617   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:37.316410   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:37.500436   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:37.665934   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:37.815368   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:38.000035   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:38.165582   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:38.316069   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:38.499898   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:38.665337   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:38.815738   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:39.002024   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:39.165962   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:39.315585   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:39.500207   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:39.666242   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:39.815564   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:40.001286   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:40.166190   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:40.316531   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:40.500640   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:40.666177   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:40.815491   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:41.000469   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:41.166537   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:41.316935   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:41.500041   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:41.665428   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:41.815574   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:42.000956   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:42.165430   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:42.316198   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:42.500051   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:42.665713   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:42.815847   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:43.001030   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:43.181275   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:43.317125   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:43.500025   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:43.665709   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:43.815681   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:44.000352   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:44.166037   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:44.315511   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:44.500701   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:44.666441   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:44.816872   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:44.999732   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:45.169415   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:45.315673   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:45.500156   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:45.665860   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:45.816193   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:46.000102   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:46.165764   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:46.315166   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:46.499700   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:46.665831   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:46.815180   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:46.999680   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:47.166718   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:47.316279   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:47.500191   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:47.665706   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:47.816373   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:48.000331   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:48.167980   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:48.319012   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:48.507455   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:48.665819   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:48.815240   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:49.000063   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:49.165313   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:49.315308   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:49.500125   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:49.666668   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:49.815553   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:50.002363   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:50.165850   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:50.324008   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:50.500860   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:50.666616   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:50.815310   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:51.000307   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:51.165977   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:51.315507   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:51.500237   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:51.665790   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:51.815352   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:52.000457   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:52.165966   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:52.324200   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:52.500921   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:52.664954   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:52.815016   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:52.999756   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:53.169189   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:53.316071   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:53.499957   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:53.665930   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:53.815497   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:54.000162   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:54.165222   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:54.316091   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:54.500361   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:54.667071   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:54.816323   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:55.001105   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:55.165799   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:55.315749   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:55.500141   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:55.665748   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:55.816745   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:56.001291   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:56.167152   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:56.315865   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:56.510826   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:56.669783   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:56.815718   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:57.003786   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:57.164989   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:57.317763   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:57.500265   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:57.670388   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:57.816335   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:58.001772   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:58.167806   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:58.314966   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:58.501399   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:58.666094   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:58.815874   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:59.000217   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:59.166020   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:59.315832   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:59.500468   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:59.669975   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:59.816943   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:00.000388   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:00.168330   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:00.315258   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:00.501331   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:00.666396   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:00.817461   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:01.000806   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:01.166827   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:01.315501   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:01.500055   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:01.665833   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:01.814977   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:01.999873   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:02.166166   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:02.315738   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:02.653586   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:02.758409   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:02.855929   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:02.999980   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:03.165308   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:03.323118   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:03.500369   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:03.665756   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:03.816027   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:04.000155   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:04.166398   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:04.316909   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:04.500578   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:04.665834   21063 kapi.go:107] duration metric: took 1m41.504612749s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 17:09:04.815886   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:05.001503   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:05.316089   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:05.499996   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:05.815639   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:06.001044   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:06.315985   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:06.499733   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:06.816066   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:06.999525   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:07.315317   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:07.500033   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:07.817140   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:07.999771   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:08.315759   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:08.500796   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:08.816126   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:08.999612   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:09.315089   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:09.499981   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:09.816704   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:10.002326   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:10.315645   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:10.500022   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:10.815998   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:10.999510   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:11.315324   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:11.500208   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:11.815991   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:11.999602   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:12.315145   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:12.500678   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:12.815980   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:13.000165   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:13.315981   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:13.499921   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:13.815291   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:14.000117   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:14.315799   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:14.500621   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:14.815693   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:15.000636   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:15.315950   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:15.500283   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:15.816394   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:15.999955   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:16.320503   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:16.500803   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:16.816641   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:16.999918   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:17.316138   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:17.499808   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:17.816168   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:17.999868   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:18.315709   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:18.500113   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:18.816285   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:18.999986   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:19.316023   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:19.499683   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:19.815607   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:20.000451   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:20.315392   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:20.500441   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:20.815824   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:21.000381   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:21.314957   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:21.500054   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:21.816032   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:22.000437   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:22.316728   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:22.500595   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:22.815501   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:23.000767   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:23.315701   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:23.500217   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:23.815770   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:23.999781   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:24.315597   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:24.500880   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:24.815535   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:25.000548   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:25.315512   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:25.502897   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:25.815971   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:25.999661   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:26.315185   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:26.500202   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:26.816624   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:27.000611   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:27.315007   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:27.499837   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:27.815469   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:28.000365   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:28.315750   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:28.500242   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:28.815826   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:29.000740   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:29.315880   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:29.499891   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:29.815502   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:30.001418   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:30.315775   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:30.500292   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:30.816980   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:30.999970   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:31.315877   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:31.499779   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:31.815788   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:32.001026   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:32.315568   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:32.501217   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:32.816273   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:33.001049   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:33.315587   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:33.500811   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:33.815559   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:34.000015   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:34.316065   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:34.501283   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:34.815005   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:35.000627   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:35.316133   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:35.500221   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:35.816047   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:36.000426   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:36.315106   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:36.499901   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:36.815875   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:37.001260   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:37.316408   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:37.499986   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:37.816671   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:38.000695   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:38.315790   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:38.500541   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:38.815007   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:38.999561   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:39.314809   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:39.500619   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:39.815873   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:40.000471   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:40.321120   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:40.500426   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:40.814985   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:41.000153   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:41.315813   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:41.501656   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:41.816363   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:42.001385   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:42.315655   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:42.501191   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:42.815887   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:43.001619   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:43.316367   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:43.500451   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:43.816230   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:44.000429   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:44.315825   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:44.500462   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:44.815935   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:45.001582   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:45.316166   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:45.501057   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:45.815264   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:46.005795   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:46.315080   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:46.499931   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:46.816746   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:47.002923   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:47.316447   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:47.500823   21063 kapi.go:107] duration metric: took 2m25.005103671s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 17:09:47.815289   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:48.318066   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:48.815315   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:49.317556   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:49.815707   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:50.316583   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:50.816623   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:51.448063   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:51.815826   21063 kapi.go:107] duration metric: took 2m26.503899498s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 17:09:51.817643   21063 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-973562 cluster.
	I0815 17:09:51.819034   21063 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 17:09:51.820498   21063 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 17:09:51.821964   21063 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, helm-tiller, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0815 17:09:51.823156   21063 addons.go:510] duration metric: took 2m38.03464767s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns metrics-server helm-tiller cloud-spanner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0815 17:09:51.823188   21063 start.go:246] waiting for cluster config update ...
	I0815 17:09:51.823204   21063 start.go:255] writing updated cluster config ...
	I0815 17:09:51.823493   21063 ssh_runner.go:195] Run: rm -f paused
	I0815 17:09:51.877018   21063 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 17:09:51.878700   21063 out.go:177] * Done! kubectl is now configured to use "addons-973562" cluster and "default" namespace by default
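	(Editorial note) The gcp-auth messages above state that credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label key. Below is a minimal sketch of a pod manifest that opts out of the credential mount; only the label key is taken from the addon output above, while the pod name, container name, image, and the "true" value are illustrative assumptions (the message only requires the key to be present):

	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"  # label key from the gcp-auth addon message; value chosen for illustration
	spec:
	  containers:
	  - name: app                     # hypothetical container
	    image: nginx                  # any image; nothing gcp-auth-specific here

	For pods created before the addon finished enabling, the output above suggests recreating them or rerunning the addon enable with --refresh (e.g. something like `minikube addons enable gcp-auth --refresh`, assuming the standard addon name).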
	
	
	==> CRI-O <==
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.017661124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742076017636464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59bd06f6-d5c8-4bae-9834-af269e05de19 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.018428238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c63e5e0-206d-4637-827f-9f2fa8f0cc40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.018483828Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c63e5e0-206d-4637-827f-9f2fa8f0cc40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.018974335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1787e89abb0afeee25c502bc3195e1f7f75942feeaa3b35ff1b3d7f52491058,PodSandboxId:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723742069249252355,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ca963650cd41c4580baf1d6e5d117b855a65c62a34882224efc66db4d9bca0,PodSandboxId:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723741927898142792,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f3d62a377a2f503bec0f3018f57156716152b2b29b1ea99afcb3e6749e528,PodSandboxId:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723741924365921852,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7838ea9e-895e-43bc-8be4-9f0d98616812,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac471d524493b542fb2b5a7f3d5d454c624dcc13df7c946cac15801b10cce2b0,PodSandboxId:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723741795552712802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14df6b9f2d522a8c6672fde288554b6b5acfb90ef76f010640bcb8699b012efe,PodSandboxId:bf71e8d1239cde3a013e0faca9ba0c7183ba3207ff0c237a205a279f4b8eb871,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723741747795973190,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k86fb,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7bb82555-c651-4039-94d8-fb7194aeb71a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b450c80c64e3e270d0161bfc989cc9ce771cff09014e26bfc808a21db4d5d,PodSandboxId:ffce20d91fbe37d3f38b7902cd4fbf3b19782182243c315b699c94f2c56248a3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723741733474231508,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingre
ss-nginx-admission-create-kxhj7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c970983-0af8-4cd3-8916-4dd1ca4e5933,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c36779dc514bd45e38edcb986fa83a4e6587d65d56edbd718ba93bf975c332,PodSandboxId:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723741680671397099,Labels:map[string]string{io.kubernetes.container.
name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e,PodSandboxId:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1723741640034562028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171,PodSandboxId:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1
723741637306610897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe,PodSandboxId:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f9
41fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723741634311918570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9,PodSandboxId:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723741623573557719,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2,PodSandboxId:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723741623565674642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97,PodSandboxId:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723741623531134743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b0525df544e39951ff83ce,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f,PodSandboxId:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0457
33566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723741623473873857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c63e5e0-206d-4637-827f-9f2fa8f0cc40 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.058862881Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f951df30-7858-477e-8c16-5fa42ef074fb name=/runtime.v1.RuntimeService/Version
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.058935445Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f951df30-7858-477e-8c16-5fa42ef074fb name=/runtime.v1.RuntimeService/Version
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.060519267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca3b7761-a431-4bc6-bcc4-7941f9f31390 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.061996624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742076061969860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca3b7761-a431-4bc6-bcc4-7941f9f31390 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.062762761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=154c30ae-fff4-4a2e-8375-756dbba081c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.062815563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=154c30ae-fff4-4a2e-8375-756dbba081c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.063117555Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1787e89abb0afeee25c502bc3195e1f7f75942feeaa3b35ff1b3d7f52491058,PodSandboxId:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723742069249252355,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ca963650cd41c4580baf1d6e5d117b855a65c62a34882224efc66db4d9bca0,PodSandboxId:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723741927898142792,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f3d62a377a2f503bec0f3018f57156716152b2b29b1ea99afcb3e6749e528,PodSandboxId:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723741924365921852,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7838ea9e-895e-43bc-8be4-9f0d98616812,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac471d524493b542fb2b5a7f3d5d454c624dcc13df7c946cac15801b10cce2b0,PodSandboxId:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723741795552712802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14df6b9f2d522a8c6672fde288554b6b5acfb90ef76f010640bcb8699b012efe,PodSandboxId:bf71e8d1239cde3a013e0faca9ba0c7183ba3207ff0c237a205a279f4b8eb871,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723741747795973190,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k86fb,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7bb82555-c651-4039-94d8-fb7194aeb71a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b450c80c64e3e270d0161bfc989cc9ce771cff09014e26bfc808a21db4d5d,PodSandboxId:ffce20d91fbe37d3f38b7902cd4fbf3b19782182243c315b699c94f2c56248a3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723741733474231508,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingre
ss-nginx-admission-create-kxhj7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c970983-0af8-4cd3-8916-4dd1ca4e5933,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c36779dc514bd45e38edcb986fa83a4e6587d65d56edbd718ba93bf975c332,PodSandboxId:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723741680671397099,Labels:map[string]string{io.kubernetes.container.
name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e,PodSandboxId:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1723741640034562028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171,PodSandboxId:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1
723741637306610897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe,PodSandboxId:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f9
41fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723741634311918570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9,PodSandboxId:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723741623573557719,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2,PodSandboxId:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723741623565674642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97,PodSandboxId:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723741623531134743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b0525df544e39951ff83ce,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f,PodSandboxId:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0457
33566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723741623473873857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=154c30ae-fff4-4a2e-8375-756dbba081c3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.100702700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=19804efc-1605-4e15-9a1e-e0c84cb8ea4e name=/runtime.v1.RuntimeService/Version
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.100780054Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=19804efc-1605-4e15-9a1e-e0c84cb8ea4e name=/runtime.v1.RuntimeService/Version
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.102072382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fa763aa-f083-482a-b921-ecd85be17711 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.103636574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742076103610620,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fa763aa-f083-482a-b921-ecd85be17711 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.104229357Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b958f933-e936-4902-a1bc-c218251d1251 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.104284843Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b958f933-e936-4902-a1bc-c218251d1251 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.104658111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1787e89abb0afeee25c502bc3195e1f7f75942feeaa3b35ff1b3d7f52491058,PodSandboxId:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723742069249252355,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ca963650cd41c4580baf1d6e5d117b855a65c62a34882224efc66db4d9bca0,PodSandboxId:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723741927898142792,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f3d62a377a2f503bec0f3018f57156716152b2b29b1ea99afcb3e6749e528,PodSandboxId:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723741924365921852,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7838ea9e-895e-43bc-8be4-9f0d98616812,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac471d524493b542fb2b5a7f3d5d454c624dcc13df7c946cac15801b10cce2b0,PodSandboxId:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723741795552712802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14df6b9f2d522a8c6672fde288554b6b5acfb90ef76f010640bcb8699b012efe,PodSandboxId:bf71e8d1239cde3a013e0faca9ba0c7183ba3207ff0c237a205a279f4b8eb871,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723741747795973190,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k86fb,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7bb82555-c651-4039-94d8-fb7194aeb71a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b450c80c64e3e270d0161bfc989cc9ce771cff09014e26bfc808a21db4d5d,PodSandboxId:ffce20d91fbe37d3f38b7902cd4fbf3b19782182243c315b699c94f2c56248a3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723741733474231508,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingre
ss-nginx-admission-create-kxhj7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c970983-0af8-4cd3-8916-4dd1ca4e5933,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c36779dc514bd45e38edcb986fa83a4e6587d65d56edbd718ba93bf975c332,PodSandboxId:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723741680671397099,Labels:map[string]string{io.kubernetes.container.
name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e,PodSandboxId:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1723741640034562028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171,PodSandboxId:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1
723741637306610897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe,PodSandboxId:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f9
41fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723741634311918570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9,PodSandboxId:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723741623573557719,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2,PodSandboxId:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723741623565674642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97,PodSandboxId:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723741623531134743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b0525df544e39951ff83ce,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f,PodSandboxId:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0457
33566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723741623473873857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b958f933-e936-4902-a1bc-c218251d1251 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.144013740Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a38b4e7-4473-444a-80f8-ffb6f540aa11 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.144085544Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a38b4e7-4473-444a-80f8-ffb6f540aa11 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.145218381Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c2d701e-b379-4972-874b-712b108e79e4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.146508177Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742076146473006,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c2d701e-b379-4972-874b-712b108e79e4 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.147264370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c0c6f83-36e9-4b3e-8df6-2ead2cc736a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.147371778Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c0c6f83-36e9-4b3e-8df6-2ead2cc736a6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:14:36 addons-973562 crio[685]: time="2024-08-15 17:14:36.147677713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1787e89abb0afeee25c502bc3195e1f7f75942feeaa3b35ff1b3d7f52491058,PodSandboxId:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723742069249252355,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ca963650cd41c4580baf1d6e5d117b855a65c62a34882224efc66db4d9bca0,PodSandboxId:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723741927898142792,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f3d62a377a2f503bec0f3018f57156716152b2b29b1ea99afcb3e6749e528,PodSandboxId:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723741924365921852,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7838ea9e-895e-43bc-8be4-9f0d98616812,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac471d524493b542fb2b5a7f3d5d454c624dcc13df7c946cac15801b10cce2b0,PodSandboxId:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723741795552712802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14df6b9f2d522a8c6672fde288554b6b5acfb90ef76f010640bcb8699b012efe,PodSandboxId:bf71e8d1239cde3a013e0faca9ba0c7183ba3207ff0c237a205a279f4b8eb871,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723741747795973190,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k86fb,io.kub
ernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 7bb82555-c651-4039-94d8-fb7194aeb71a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e23eadd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f03b450c80c64e3e270d0161bfc989cc9ce771cff09014e26bfc808a21db4d5d,PodSandboxId:ffce20d91fbe37d3f38b7902cd4fbf3b19782182243c315b699c94f2c56248a3,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66,State:CONTAINER_EXITED,CreatedAt:1723741733474231508,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingre
ss-nginx-admission-create-kxhj7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 3c970983-0af8-4cd3-8916-4dd1ca4e5933,},Annotations:map[string]string{io.kubernetes.container.hash: 7b54fe70,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c36779dc514bd45e38edcb986fa83a4e6587d65d56edbd718ba93bf975c332,PodSandboxId:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723741680671397099,Labels:map[string]string{io.kubernetes.container.
name: metrics-server,io.kubernetes.pod.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e,PodSandboxId:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTA
INER_RUNNING,CreatedAt:1723741640034562028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171,PodSandboxId:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1
723741637306610897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe,PodSandboxId:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f9
41fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723741634311918570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9,PodSandboxId:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[strin
g]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723741623573557719,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2,PodSandboxId:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723741623565674642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97,PodSandboxId:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Image
Ref:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723741623531134743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b0525df544e39951ff83ce,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f,PodSandboxId:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0457
33566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723741623473873857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c0c6f83-36e9-4b3e-8df6-2ead2cc736a6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1787e89abb0a       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        6 seconds ago       Running             hello-world-app           0                   7a1b069065eca       hello-world-app-55bf9c44b4-wzp2w
	f2ca963650cd4       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                              2 minutes ago       Running             nginx                     0                   96582e0891be7       nginx
	8a5f3d62a377a       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                        2 minutes ago       Running             headlamp                  0                   f9f8a5263d0c5       headlamp-57fb76fcdb-lt6rm
	ac471d524493b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago       Running             busybox                   0                   f0de5f43bd64b       busybox
	14df6b9f2d522       684c5ea3b61b299cd4e713c10bfd8989341da91f6175e2e6e502869c0781fb66                                                             5 minutes ago       Exited              patch                     2                   bf71e8d1239cd       ingress-nginx-admission-patch-k86fb
	f03b450c80c64       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:35379defc3e7025b1c00d37092f560ce87d06ea5ab35d04ff8a0cf22d316bcf2   5 minutes ago       Exited              create                    0                   ffce20d91fbe3       ingress-nginx-admission-create-kxhj7
	26c36779dc514       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872        6 minutes ago       Running             metrics-server            0                   336e0b99d7ea1       metrics-server-8988944d9-2rpw7
	7a08fe240691c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             7 minutes ago       Running             storage-provisioner       0                   a2282c836ecde       storage-provisioner
	8c7dabbdd78f5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                             7 minutes ago       Running             coredns                   0                   803fe16e00517       coredns-6f6b679f8f-mpjgp
	b76f57abbb47d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                             7 minutes ago       Running             kube-proxy                0                   ae8398b86515e       kube-proxy-9zjlq
	80fea5c45564c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             7 minutes ago       Running             etcd                      0                   23ad700d62d0a       etcd-addons-973562
	3534ecea3b438       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                             7 minutes ago       Running             kube-apiserver            0                   4a169f6964f2c       kube-apiserver-addons-973562
	3128463831d39       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                             7 minutes ago       Running             kube-scheduler            0                   0a7f6e41dc2b2       kube-scheduler-addons-973562
	3a83258a356d6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                             7 minutes ago       Running             kube-controller-manager   0                   ebe45e82f021e       kube-controller-manager-addons-973562
	
	
	==> coredns [8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171] <==
	[INFO] 10.244.0.7:53290 - 8087 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000302511s
	[INFO] 10.244.0.7:34213 - 6733 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114509s
	[INFO] 10.244.0.7:34213 - 27465 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000136584s
	[INFO] 10.244.0.7:44234 - 59051 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000145018s
	[INFO] 10.244.0.7:44234 - 6052 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105852s
	[INFO] 10.244.0.7:37848 - 21755 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128264s
	[INFO] 10.244.0.7:37848 - 62713 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189252s
	[INFO] 10.244.0.7:42584 - 64233 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075884s
	[INFO] 10.244.0.7:42584 - 17133 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043453s
	[INFO] 10.244.0.7:37081 - 7420 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044415s
	[INFO] 10.244.0.7:37081 - 42995 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041904s
	[INFO] 10.244.0.7:57302 - 60628 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035068s
	[INFO] 10.244.0.7:57302 - 36566 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031008s
	[INFO] 10.244.0.7:38674 - 13771 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000028824s
	[INFO] 10.244.0.7:38674 - 18389 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000051632s
	[INFO] 10.244.0.22:50305 - 53594 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000411796s
	[INFO] 10.244.0.22:33601 - 28784 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000082061s
	[INFO] 10.244.0.22:47222 - 45431 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012319s
	[INFO] 10.244.0.22:45471 - 59317 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000055836s
	[INFO] 10.244.0.22:57069 - 53768 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063639s
	[INFO] 10.244.0.22:57346 - 54554 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112021s
	[INFO] 10.244.0.22:47088 - 52858 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000838726s
	[INFO] 10.244.0.22:50023 - 43526 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000480694s
	[INFO] 10.244.0.26:42469 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000348585s
	[INFO] 10.244.0.26:60115 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000196037s
	
	
	==> describe nodes <==
	Name:               addons-973562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-973562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=addons-973562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_07_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-973562
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:07:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-973562
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:14:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:12:15 +0000   Thu, 15 Aug 2024 17:07:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:12:15 +0000   Thu, 15 Aug 2024 17:07:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:12:15 +0000   Thu, 15 Aug 2024 17:07:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:12:15 +0000   Thu, 15 Aug 2024 17:07:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    addons-973562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdb6e2a853b14fdda051e6504cd494ec
	  System UUID:                cdb6e2a8-53b1-4fdd-a051-e6504cd494ec
	  Boot ID:                    6b438358-870b-4061-a65c-37cfc5f1b5de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  default                     hello-world-app-55bf9c44b4-wzp2w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  headlamp                    headlamp-57fb76fcdb-lt6rm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 coredns-6f6b679f8f-mpjgp                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m22s
	  kube-system                 etcd-addons-973562                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m28s
	  kube-system                 kube-apiserver-addons-973562             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-controller-manager-addons-973562    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-9zjlq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-scheduler-addons-973562             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 metrics-server-8988944d9-2rpw7           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m17s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m21s  kube-proxy       
	  Normal  Starting                 7m28s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m28s  kubelet          Node addons-973562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s  kubelet          Node addons-973562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m28s  kubelet          Node addons-973562 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m26s  kubelet          Node addons-973562 status is now: NodeReady
	  Normal  RegisteredNode           7m23s  node-controller  Node addons-973562 event: Registered Node addons-973562 in Controller
	
	
	==> dmesg <==
	[  +0.008138] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[  +5.000153] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.023955] kauditd_printk_skb: 167 callbacks suppressed
	[  +8.582623] kauditd_printk_skb: 53 callbacks suppressed
	[Aug15 17:08] kauditd_printk_skb: 34 callbacks suppressed
	[ +48.881489] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.817830] kauditd_printk_skb: 16 callbacks suppressed
	[Aug15 17:09] kauditd_printk_skb: 83 callbacks suppressed
	[  +7.104704] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.518849] kauditd_printk_skb: 6 callbacks suppressed
	[ +23.459117] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.007597] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.588306] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.187096] kauditd_printk_skb: 47 callbacks suppressed
	[Aug15 17:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.077073] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.050695] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.048491] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.232311] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.091737] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.198622] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.369628] kauditd_printk_skb: 41 callbacks suppressed
	[Aug15 17:12] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 17:14] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.087244] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9] <==
	{"level":"info","ts":"2024-08-15T17:08:18.022011Z","caller":"traceutil/trace.go:171","msg":"trace[659034095] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:957; }","duration":"205.760586ms","start":"2024-08-15T17:08:17.816244Z","end":"2024-08-15T17:08:18.022005Z","steps":["trace[659034095] 'agreement among raft nodes before linearized reading'  (duration: 205.667408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:08:18.022195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.341643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:08:18.022233Z","caller":"traceutil/trace.go:171","msg":"trace[778957565] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:957; }","duration":"138.380112ms","start":"2024-08-15T17:08:17.883847Z","end":"2024-08-15T17:08:18.022227Z","steps":["trace[778957565] 'agreement among raft nodes before linearized reading'  (duration: 138.333227ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:09:02.637718Z","caller":"traceutil/trace.go:171","msg":"trace[1479075003] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"222.710195ms","start":"2024-08-15T17:09:02.414979Z","end":"2024-08-15T17:09:02.637689Z","steps":["trace[1479075003] 'read index received'  (duration: 222.542398ms)","trace[1479075003] 'applied index is now lower than readState.Index'  (duration: 167.125µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:09:02.637860Z","caller":"traceutil/trace.go:171","msg":"trace[263805521] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"321.65567ms","start":"2024-08-15T17:09:02.316172Z","end":"2024-08-15T17:09:02.637828Z","steps":["trace[263805521] 'process raft request'  (duration: 321.397523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:09:02.637964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.96806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-08-15T17:09:02.637991Z","caller":"traceutil/trace.go:171","msg":"trace[956218439] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1130; }","duration":"223.009994ms","start":"2024-08-15T17:09:02.414975Z","end":"2024-08-15T17:09:02.637985Z","steps":["trace[956218439] 'agreement among raft nodes before linearized reading'  (duration: 222.911213ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:09:02.637998Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T17:09:02.316159Z","time spent":"321.743418ms","remote":"127.0.0.1:52432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1121 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-15T17:09:02.638131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.37401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:09:02.638146Z","caller":"traceutil/trace.go:171","msg":"trace[619905049] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1130; }","duration":"151.390267ms","start":"2024-08-15T17:09:02.486751Z","end":"2024-08-15T17:09:02.638142Z","steps":["trace[619905049] 'agreement among raft nodes before linearized reading'  (duration: 151.362195ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:09:51.407826Z","caller":"traceutil/trace.go:171","msg":"trace[1285894460] linearizableReadLoop","detail":"{readStateIndex:1321; appliedIndex:1320; }","duration":"106.389545ms","start":"2024-08-15T17:09:51.301423Z","end":"2024-08-15T17:09:51.407813Z","steps":["trace[1285894460] 'read index received'  (duration: 105.786286ms)","trace[1285894460] 'applied index is now lower than readState.Index'  (duration: 602.567µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:09:51.408285Z","caller":"traceutil/trace.go:171","msg":"trace[219595597] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"114.744101ms","start":"2024-08-15T17:09:51.293530Z","end":"2024-08-15T17:09:51.408274Z","steps":["trace[219595597] 'process raft request'  (duration: 113.717188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:09:51.408816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.377482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:09:51.408932Z","caller":"traceutil/trace.go:171","msg":"trace[1075537955] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"107.504691ms","start":"2024-08-15T17:09:51.301419Z","end":"2024-08-15T17:09:51.408924Z","steps":["trace[1075537955] 'agreement among raft nodes before linearized reading'  (duration: 107.278909ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:10:35.502719Z","caller":"traceutil/trace.go:171","msg":"trace[1769090326] linearizableReadLoop","detail":"{readStateIndex:1638; appliedIndex:1637; }","duration":"279.676559ms","start":"2024-08-15T17:10:35.223013Z","end":"2024-08-15T17:10:35.502690Z","steps":["trace[1769090326] 'read index received'  (duration: 279.206377ms)","trace[1769090326] 'applied index is now lower than readState.Index'  (duration: 469.696µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:10:35.504699Z","caller":"traceutil/trace.go:171","msg":"trace[5475948] transaction","detail":"{read_only:false; response_revision:1569; number_of_response:1; }","duration":"309.991886ms","start":"2024-08-15T17:10:35.194689Z","end":"2024-08-15T17:10:35.504681Z","steps":["trace[5475948] 'process raft request'  (duration: 307.789053ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:10:35.505675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.631607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-08-15T17:10:35.506459Z","caller":"traceutil/trace.go:171","msg":"trace[1778051888] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1569; }","duration":"283.439651ms","start":"2024-08-15T17:10:35.223010Z","end":"2024-08-15T17:10:35.506449Z","steps":["trace[1778051888] 'agreement among raft nodes before linearized reading'  (duration: 282.567612ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:10:35.507228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.234927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:10:35.507875Z","caller":"traceutil/trace.go:171","msg":"trace[1556534310] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1569; }","duration":"267.890328ms","start":"2024-08-15T17:10:35.239974Z","end":"2024-08-15T17:10:35.507864Z","steps":["trace[1556534310] 'agreement among raft nodes before linearized reading'  (duration: 267.147398ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:10:35.505963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.167192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:10:35.508616Z","caller":"traceutil/trace.go:171","msg":"trace[1116694190] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1569; }","duration":"282.819995ms","start":"2024-08-15T17:10:35.225788Z","end":"2024-08-15T17:10:35.508608Z","steps":["trace[1116694190] 'agreement among raft nodes before linearized reading'  (duration: 280.154463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:10:35.505857Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T17:10:35.194671Z","time spent":"310.725508ms","remote":"127.0.0.1:52432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1567 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-15T17:11:10.991020Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.127675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:11:10.991104Z","caller":"traceutil/trace.go:171","msg":"trace[948880056] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1888; }","duration":"107.227148ms","start":"2024-08-15T17:11:10.883866Z","end":"2024-08-15T17:11:10.991093Z","steps":["trace[948880056] 'range keys from in-memory index tree'  (duration: 107.009282ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:14:36 up 8 min,  0 users,  load average: 0.09, 0.86, 0.61
	Linux addons-973562 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2] <==
	I0815 17:09:10.047175       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0815 17:10:03.349103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.200:8443->192.168.39.1:55528: use of closed network connection
	E0815 17:10:03.566517       1 conn.go:339] Error on socket receive: read tcp 192.168.39.200:8443->192.168.39.1:55550: use of closed network connection
	E0815 17:10:39.660151       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0815 17:10:41.305778       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 17:10:48.561477       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 17:10:49.611612       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0815 17:10:54.654484       1 watch.go:250] "Unhandled Error" err="client disconnected" logger="UnhandledError"
	I0815 17:11:01.247561       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.247600       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.279100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.279157       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.309740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.310108       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.326796       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.326850       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.476226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.476277       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.897488       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.241.9"}
	I0815 17:11:02.090731       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 17:11:02.264529       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.111.110"}
	W0815 17:11:02.327442       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 17:11:02.476908       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0815 17:11:02.481666       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0815 17:14:26.210087       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.11.254"}
	
	
	==> kube-controller-manager [3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f] <==
	W0815 17:13:03.555903       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:13:03.556098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:13:27.003817       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:13:27.003872       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:13:40.907549       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:13:40.907601       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:13:44.249780       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:13:44.249895       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:13:53.853261       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:13:53.853417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:14:15.460963       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:14:15.461194       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 17:14:26.033194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.017905ms"
	I0815 17:14:26.049560       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.688944ms"
	I0815 17:14:26.049777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="119.08µs"
	I0815 17:14:26.066659       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.688µs"
	I0815 17:14:28.208139       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0815 17:14:28.214508       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7559cbf597" duration="4.74µs"
	I0815 17:14:28.219410       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0815 17:14:29.496824       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:14:29.496940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 17:14:29.597059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.366669ms"
	I0815 17:14:29.597486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.087µs"
	W0815 17:14:35.028368       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:14:35.028421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:07:15.017266       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 17:07:15.029626       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	E0815 17:07:15.032460       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:07:15.119956       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:07:15.119989       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:07:15.120048       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:07:15.131076       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:07:15.131279       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:07:15.131290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:07:15.133199       1 config.go:197] "Starting service config controller"
	I0815 17:07:15.133208       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:07:15.133223       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:07:15.133226       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:07:15.137158       1 config.go:326] "Starting node config controller"
	I0815 17:07:15.137168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:07:15.234398       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 17:07:15.234435       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:07:15.237373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97] <==
	W0815 17:07:06.176934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:07:06.178714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.176982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 17:07:06.178802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:07:06.178855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:07:06.178907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 17:07:06.181578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:07:06.181642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177203       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 17:07:06.181708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.119923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:07:07.120023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.197757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:07:07.197886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.209005       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:07:07.209098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.240143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:07:07.240192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.362515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:07:07.362687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 17:07:07.733213       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 17:14:19 addons-973562 kubelet[1222]: E0815 17:14:19.085561    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742059085164600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581817,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:14:26 addons-973562 kubelet[1222]: I0815 17:14:26.149346    1222 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrqqz\" (UniqueName: \"kubernetes.io/projected/2579b064-be76-41aa-8fd9-ea64aefd8eed-kube-api-access-nrqqz\") pod \"hello-world-app-55bf9c44b4-wzp2w\" (UID: \"2579b064-be76-41aa-8fd9-ea64aefd8eed\") " pod="default/hello-world-app-55bf9c44b4-wzp2w"
	Aug 15 17:14:27 addons-973562 kubelet[1222]: I0815 17:14:27.256105    1222 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24rxr\" (UniqueName: \"kubernetes.io/projected/af9ffb5d-8172-478e-bf4f-ce5fafaba75b-kube-api-access-24rxr\") pod \"af9ffb5d-8172-478e-bf4f-ce5fafaba75b\" (UID: \"af9ffb5d-8172-478e-bf4f-ce5fafaba75b\") "
	Aug 15 17:14:27 addons-973562 kubelet[1222]: I0815 17:14:27.258256    1222 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af9ffb5d-8172-478e-bf4f-ce5fafaba75b-kube-api-access-24rxr" (OuterVolumeSpecName: "kube-api-access-24rxr") pod "af9ffb5d-8172-478e-bf4f-ce5fafaba75b" (UID: "af9ffb5d-8172-478e-bf4f-ce5fafaba75b"). InnerVolumeSpecName "kube-api-access-24rxr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 17:14:27 addons-973562 kubelet[1222]: I0815 17:14:27.356856    1222 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-24rxr\" (UniqueName: \"kubernetes.io/projected/af9ffb5d-8172-478e-bf4f-ce5fafaba75b-kube-api-access-24rxr\") on node \"addons-973562\" DevicePath \"\""
	Aug 15 17:14:27 addons-973562 kubelet[1222]: I0815 17:14:27.565423    1222 scope.go:117] "RemoveContainer" containerID="19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369"
	Aug 15 17:14:27 addons-973562 kubelet[1222]: I0815 17:14:27.604268    1222 scope.go:117] "RemoveContainer" containerID="19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369"
	Aug 15 17:14:27 addons-973562 kubelet[1222]: E0815 17:14:27.604904    1222 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369\": container with ID starting with 19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369 not found: ID does not exist" containerID="19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369"
	Aug 15 17:14:27 addons-973562 kubelet[1222]: I0815 17:14:27.604942    1222 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369"} err="failed to get container status \"19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369\": rpc error: code = NotFound desc = could not find container \"19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369\": container with ID starting with 19b3027f33e6aaa27d7b28f5090e5fea036a28e00ab4827458ca6e40d59b3369 not found: ID does not exist"
	Aug 15 17:14:28 addons-973562 kubelet[1222]: I0815 17:14:28.782995    1222 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c970983-0af8-4cd3-8916-4dd1ca4e5933" path="/var/lib/kubelet/pods/3c970983-0af8-4cd3-8916-4dd1ca4e5933/volumes"
	Aug 15 17:14:28 addons-973562 kubelet[1222]: I0815 17:14:28.783759    1222 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb82555-c651-4039-94d8-fb7194aeb71a" path="/var/lib/kubelet/pods/7bb82555-c651-4039-94d8-fb7194aeb71a/volumes"
	Aug 15 17:14:28 addons-973562 kubelet[1222]: I0815 17:14:28.784285    1222 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af9ffb5d-8172-478e-bf4f-ce5fafaba75b" path="/var/lib/kubelet/pods/af9ffb5d-8172-478e-bf4f-ce5fafaba75b/volumes"
	Aug 15 17:14:29 addons-973562 kubelet[1222]: E0815 17:14:29.089130    1222 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742069088648562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581817,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:14:29 addons-973562 kubelet[1222]: E0815 17:14:29.089166    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742069088648562,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:581817,},InodesUsed:&UInt64Value{Value:205,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.495539    1222 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6sxl\" (UniqueName: \"kubernetes.io/projected/db58bd7e-a4c1-4518-8807-759c581797eb-kube-api-access-r6sxl\") pod \"db58bd7e-a4c1-4518-8807-759c581797eb\" (UID: \"db58bd7e-a4c1-4518-8807-759c581797eb\") "
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.495597    1222 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db58bd7e-a4c1-4518-8807-759c581797eb-webhook-cert\") pod \"db58bd7e-a4c1-4518-8807-759c581797eb\" (UID: \"db58bd7e-a4c1-4518-8807-759c581797eb\") "
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.497796    1222 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db58bd7e-a4c1-4518-8807-759c581797eb-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "db58bd7e-a4c1-4518-8807-759c581797eb" (UID: "db58bd7e-a4c1-4518-8807-759c581797eb"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.499211    1222 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db58bd7e-a4c1-4518-8807-759c581797eb-kube-api-access-r6sxl" (OuterVolumeSpecName: "kube-api-access-r6sxl") pod "db58bd7e-a4c1-4518-8807-759c581797eb" (UID: "db58bd7e-a4c1-4518-8807-759c581797eb"). InnerVolumeSpecName "kube-api-access-r6sxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.586709    1222 scope.go:117] "RemoveContainer" containerID="66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78"
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.595955    1222 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r6sxl\" (UniqueName: \"kubernetes.io/projected/db58bd7e-a4c1-4518-8807-759c581797eb-kube-api-access-r6sxl\") on node \"addons-973562\" DevicePath \"\""
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.595975    1222 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db58bd7e-a4c1-4518-8807-759c581797eb-webhook-cert\") on node \"addons-973562\" DevicePath \"\""
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.605800    1222 scope.go:117] "RemoveContainer" containerID="66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78"
	Aug 15 17:14:31 addons-973562 kubelet[1222]: E0815 17:14:31.606114    1222 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78\": container with ID starting with 66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78 not found: ID does not exist" containerID="66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78"
	Aug 15 17:14:31 addons-973562 kubelet[1222]: I0815 17:14:31.606141    1222 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78"} err="failed to get container status \"66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78\": rpc error: code = NotFound desc = could not find container \"66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78\": container with ID starting with 66b5c2534651154ee75983e7d1598e42f4a5bc43aa9356d1e93d84acc5a95d78 not found: ID does not exist"
	Aug 15 17:14:32 addons-973562 kubelet[1222]: I0815 17:14:32.782011    1222 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db58bd7e-a4c1-4518-8807-759c581797eb" path="/var/lib/kubelet/pods/db58bd7e-a4c1-4518-8807-759c581797eb/volumes"
	
	
	==> storage-provisioner [7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e] <==
	I0815 17:07:20.818609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:07:20.844232       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:07:20.844384       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:07:20.889431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:07:20.890428       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-973562_365c5dec-5ae0-4e58-a19c-7bd73df05d0f!
	I0815 17:07:20.891446       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46b7b630-dbf4-4aa1-a49f-b9ac7c30c938", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-973562_365c5dec-5ae0-4e58-a19c-7bd73df05d0f became leader
	I0815 17:07:21.010510       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-973562_365c5dec-5ae0-4e58-a19c-7bd73df05d0f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-973562 -n addons-973562
helpers_test.go:261: (dbg) Run:  kubectl --context addons-973562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (215.44s)

                                                
                                    
TestAddons/parallel/MetricsServer (349.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.341164ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-2rpw7" [5ccb0984-23af-4380-b4e7-c266d3917b45] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00383542s
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (64.820922ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 3m19.791200975s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (69.032408ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 3m22.018493203s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (73.826169ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 3m24.854732509s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (63.291484ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 3m32.015558458s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (64.272024ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 3m39.197251916s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (63.557731ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 4m1.320890444s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (61.058692ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 4m26.18241552s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (60.717543ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 5m0.634530929s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (62.017917ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 5m37.095071367s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (62.787764ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 6m29.439294591s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (62.798544ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 7m49.816022956s

                                                
                                                
** /stderr **
addons_test.go:417: (dbg) Run:  kubectl --context addons-973562 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-973562 top pods -n kube-system: exit status 1 (59.568747ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-6f6b679f8f-mpjgp, age: 9m1.473110404s

                                                
                                                
** /stderr **
addons_test.go:431: failed checking metric server: exit status 1
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-973562 -n addons-973562
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-973562 logs -n 25: (1.223284273s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-379390                                                                     | download-only-379390 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-174247 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC |                     |
	|         | binary-mirror-174247                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41239                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-174247                                                                     | binary-mirror-174247 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC |                     |
	|         | addons-973562                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC |                     |
	|         | addons-973562                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-973562 --wait=true                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| ssh     | addons-973562 ssh cat                                                                       | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | /opt/local-path-provisioner/pvc-a475e29f-cfc6-4625-8bed-59ac85b175a1_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:11 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-973562 ip                                                                            | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | -p addons-973562                                                                            |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:10 UTC |
	|         | addons-973562                                                                               |                      |         |         |                     |                     |
	| addons  | addons-973562 addons                                                                        | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:10 UTC | 15 Aug 24 17:11 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | addons-973562                                                                               |                      |         |         |                     |                     |
	| addons  | addons-973562 addons                                                                        | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:11 UTC | 15 Aug 24 17:11 UTC |
	|         | -p addons-973562                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:12 UTC | 15 Aug 24 17:12 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-973562 ssh curl -s                                                                   | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:12 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-973562 ip                                                                            | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:14 UTC | 15 Aug 24 17:14 UTC |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:14 UTC | 15 Aug 24 17:14 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-973562 addons disable                                                                | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:14 UTC | 15 Aug 24 17:14 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-973562 addons                                                                        | addons-973562        | jenkins | v1.33.1 | 15 Aug 24 17:16 UTC | 15 Aug 24 17:16 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:06:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:06:25.300617   21063 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:06:25.300876   21063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:06:25.300886   21063 out.go:358] Setting ErrFile to fd 2...
	I0815 17:06:25.300890   21063 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:06:25.301072   21063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:06:25.301633   21063 out.go:352] Setting JSON to false
	I0815 17:06:25.302421   21063 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2931,"bootTime":1723738654,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:06:25.302472   21063 start.go:139] virtualization: kvm guest
	I0815 17:06:25.304709   21063 out.go:177] * [addons-973562] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:06:25.305859   21063 notify.go:220] Checking for updates...
	I0815 17:06:25.305898   21063 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:06:25.307151   21063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:06:25.308452   21063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:06:25.309693   21063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:06:25.310870   21063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:06:25.311955   21063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:06:25.313273   21063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:06:25.343867   21063 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 17:06:25.345406   21063 start.go:297] selected driver: kvm2
	I0815 17:06:25.345427   21063 start.go:901] validating driver "kvm2" against <nil>
	I0815 17:06:25.345438   21063 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:06:25.346089   21063 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:06:25.346151   21063 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 17:06:25.360253   21063 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 17:06:25.360304   21063 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:06:25.360548   21063 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:06:25.360630   21063 cni.go:84] Creating CNI manager for ""
	I0815 17:06:25.360647   21063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 17:06:25.360661   21063 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:06:25.360739   21063 start.go:340] cluster config:
	{Name:addons-973562 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:06:25.360844   21063 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:06:25.362675   21063 out.go:177] * Starting "addons-973562" primary control-plane node in "addons-973562" cluster
	I0815 17:06:25.364055   21063 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:06:25.364086   21063 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:06:25.364105   21063 cache.go:56] Caching tarball of preloaded images
	I0815 17:06:25.364203   21063 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:06:25.364237   21063 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:06:25.364614   21063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/config.json ...
	I0815 17:06:25.364638   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/config.json: {Name:mkb53d52d787f17d133a7c9739d3e174f96bcdf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:25.364774   21063 start.go:360] acquireMachinesLock for addons-973562: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:06:25.364817   21063 start.go:364] duration metric: took 30.636µs to acquireMachinesLock for "addons-973562"
	I0815 17:06:25.364833   21063 start.go:93] Provisioning new machine with config: &{Name:addons-973562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:06:25.364902   21063 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 17:06:25.366501   21063 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0815 17:06:25.366689   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:06:25.366733   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:06:25.380543   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I0815 17:06:25.380941   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:06:25.381487   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:06:25.381505   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:06:25.381817   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:06:25.381985   21063 main.go:141] libmachine: (addons-973562) Calling .GetMachineName
	I0815 17:06:25.382137   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:25.382262   21063 start.go:159] libmachine.API.Create for "addons-973562" (driver="kvm2")
	I0815 17:06:25.382283   21063 client.go:168] LocalClient.Create starting
	I0815 17:06:25.382312   21063 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 17:06:25.517440   21063 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 17:06:25.758135   21063 main.go:141] libmachine: Running pre-create checks...
	I0815 17:06:25.758162   21063 main.go:141] libmachine: (addons-973562) Calling .PreCreateCheck
	I0815 17:06:25.758620   21063 main.go:141] libmachine: (addons-973562) Calling .GetConfigRaw
	I0815 17:06:25.759012   21063 main.go:141] libmachine: Creating machine...
	I0815 17:06:25.759026   21063 main.go:141] libmachine: (addons-973562) Calling .Create
	I0815 17:06:25.759162   21063 main.go:141] libmachine: (addons-973562) Creating KVM machine...
	I0815 17:06:25.760400   21063 main.go:141] libmachine: (addons-973562) DBG | found existing default KVM network
	I0815 17:06:25.761287   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:25.761135   21085 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0815 17:06:25.761322   21063 main.go:141] libmachine: (addons-973562) DBG | created network xml: 
	I0815 17:06:25.761338   21063 main.go:141] libmachine: (addons-973562) DBG | <network>
	I0815 17:06:25.761346   21063 main.go:141] libmachine: (addons-973562) DBG |   <name>mk-addons-973562</name>
	I0815 17:06:25.761358   21063 main.go:141] libmachine: (addons-973562) DBG |   <dns enable='no'/>
	I0815 17:06:25.761370   21063 main.go:141] libmachine: (addons-973562) DBG |   
	I0815 17:06:25.761379   21063 main.go:141] libmachine: (addons-973562) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 17:06:25.761389   21063 main.go:141] libmachine: (addons-973562) DBG |     <dhcp>
	I0815 17:06:25.761394   21063 main.go:141] libmachine: (addons-973562) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 17:06:25.761403   21063 main.go:141] libmachine: (addons-973562) DBG |     </dhcp>
	I0815 17:06:25.761410   21063 main.go:141] libmachine: (addons-973562) DBG |   </ip>
	I0815 17:06:25.761416   21063 main.go:141] libmachine: (addons-973562) DBG |   
	I0815 17:06:25.761423   21063 main.go:141] libmachine: (addons-973562) DBG | </network>
	I0815 17:06:25.761433   21063 main.go:141] libmachine: (addons-973562) DBG | 
	I0815 17:06:25.766318   21063 main.go:141] libmachine: (addons-973562) DBG | trying to create private KVM network mk-addons-973562 192.168.39.0/24...
	I0815 17:06:25.827958   21063 main.go:141] libmachine: (addons-973562) DBG | private KVM network mk-addons-973562 192.168.39.0/24 created
	I0815 17:06:25.827983   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:25.827913   21085 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:06:25.827991   21063 main.go:141] libmachine: (addons-973562) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562 ...
	I0815 17:06:25.828003   21063 main.go:141] libmachine: (addons-973562) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:06:25.828060   21063 main.go:141] libmachine: (addons-973562) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 17:06:26.073809   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:26.073693   21085 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa...
	I0815 17:06:26.207228   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:26.207070   21085 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/addons-973562.rawdisk...
	I0815 17:06:26.207263   21063 main.go:141] libmachine: (addons-973562) DBG | Writing magic tar header
	I0815 17:06:26.207279   21063 main.go:141] libmachine: (addons-973562) DBG | Writing SSH key tar header
	I0815 17:06:26.207289   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:26.207227   21085 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562 ...
	I0815 17:06:26.207831   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562
	I0815 17:06:26.207856   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 17:06:26.207869   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562 (perms=drwx------)
	I0815 17:06:26.207884   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 17:06:26.207894   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 17:06:26.207907   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 17:06:26.207917   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 17:06:26.207929   21063 main.go:141] libmachine: (addons-973562) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 17:06:26.207937   21063 main.go:141] libmachine: (addons-973562) Creating domain...
	I0815 17:06:26.207950   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:06:26.207961   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 17:06:26.207973   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 17:06:26.207982   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home/jenkins
	I0815 17:06:26.207990   21063 main.go:141] libmachine: (addons-973562) DBG | Checking permissions on dir: /home
	I0815 17:06:26.208001   21063 main.go:141] libmachine: (addons-973562) DBG | Skipping /home - not owner
	I0815 17:06:26.208963   21063 main.go:141] libmachine: (addons-973562) define libvirt domain using xml: 
	I0815 17:06:26.208986   21063 main.go:141] libmachine: (addons-973562) <domain type='kvm'>
	I0815 17:06:26.208994   21063 main.go:141] libmachine: (addons-973562)   <name>addons-973562</name>
	I0815 17:06:26.208999   21063 main.go:141] libmachine: (addons-973562)   <memory unit='MiB'>4000</memory>
	I0815 17:06:26.209004   21063 main.go:141] libmachine: (addons-973562)   <vcpu>2</vcpu>
	I0815 17:06:26.209009   21063 main.go:141] libmachine: (addons-973562)   <features>
	I0815 17:06:26.209014   21063 main.go:141] libmachine: (addons-973562)     <acpi/>
	I0815 17:06:26.209024   21063 main.go:141] libmachine: (addons-973562)     <apic/>
	I0815 17:06:26.209032   21063 main.go:141] libmachine: (addons-973562)     <pae/>
	I0815 17:06:26.209039   21063 main.go:141] libmachine: (addons-973562)     
	I0815 17:06:26.209048   21063 main.go:141] libmachine: (addons-973562)   </features>
	I0815 17:06:26.209055   21063 main.go:141] libmachine: (addons-973562)   <cpu mode='host-passthrough'>
	I0815 17:06:26.209062   21063 main.go:141] libmachine: (addons-973562)   
	I0815 17:06:26.209077   21063 main.go:141] libmachine: (addons-973562)   </cpu>
	I0815 17:06:26.209082   21063 main.go:141] libmachine: (addons-973562)   <os>
	I0815 17:06:26.209087   21063 main.go:141] libmachine: (addons-973562)     <type>hvm</type>
	I0815 17:06:26.209093   21063 main.go:141] libmachine: (addons-973562)     <boot dev='cdrom'/>
	I0815 17:06:26.209097   21063 main.go:141] libmachine: (addons-973562)     <boot dev='hd'/>
	I0815 17:06:26.209102   21063 main.go:141] libmachine: (addons-973562)     <bootmenu enable='no'/>
	I0815 17:06:26.209106   21063 main.go:141] libmachine: (addons-973562)   </os>
	I0815 17:06:26.209130   21063 main.go:141] libmachine: (addons-973562)   <devices>
	I0815 17:06:26.209147   21063 main.go:141] libmachine: (addons-973562)     <disk type='file' device='cdrom'>
	I0815 17:06:26.209159   21063 main.go:141] libmachine: (addons-973562)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/boot2docker.iso'/>
	I0815 17:06:26.209167   21063 main.go:141] libmachine: (addons-973562)       <target dev='hdc' bus='scsi'/>
	I0815 17:06:26.209176   21063 main.go:141] libmachine: (addons-973562)       <readonly/>
	I0815 17:06:26.209183   21063 main.go:141] libmachine: (addons-973562)     </disk>
	I0815 17:06:26.209189   21063 main.go:141] libmachine: (addons-973562)     <disk type='file' device='disk'>
	I0815 17:06:26.209197   21063 main.go:141] libmachine: (addons-973562)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 17:06:26.209207   21063 main.go:141] libmachine: (addons-973562)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/addons-973562.rawdisk'/>
	I0815 17:06:26.209213   21063 main.go:141] libmachine: (addons-973562)       <target dev='hda' bus='virtio'/>
	I0815 17:06:26.209269   21063 main.go:141] libmachine: (addons-973562)     </disk>
	I0815 17:06:26.209311   21063 main.go:141] libmachine: (addons-973562)     <interface type='network'>
	I0815 17:06:26.209327   21063 main.go:141] libmachine: (addons-973562)       <source network='mk-addons-973562'/>
	I0815 17:06:26.209339   21063 main.go:141] libmachine: (addons-973562)       <model type='virtio'/>
	I0815 17:06:26.209351   21063 main.go:141] libmachine: (addons-973562)     </interface>
	I0815 17:06:26.209368   21063 main.go:141] libmachine: (addons-973562)     <interface type='network'>
	I0815 17:06:26.209384   21063 main.go:141] libmachine: (addons-973562)       <source network='default'/>
	I0815 17:06:26.209394   21063 main.go:141] libmachine: (addons-973562)       <model type='virtio'/>
	I0815 17:06:26.209405   21063 main.go:141] libmachine: (addons-973562)     </interface>
	I0815 17:06:26.209415   21063 main.go:141] libmachine: (addons-973562)     <serial type='pty'>
	I0815 17:06:26.209427   21063 main.go:141] libmachine: (addons-973562)       <target port='0'/>
	I0815 17:06:26.209438   21063 main.go:141] libmachine: (addons-973562)     </serial>
	I0815 17:06:26.209451   21063 main.go:141] libmachine: (addons-973562)     <console type='pty'>
	I0815 17:06:26.209462   21063 main.go:141] libmachine: (addons-973562)       <target type='serial' port='0'/>
	I0815 17:06:26.209478   21063 main.go:141] libmachine: (addons-973562)     </console>
	I0815 17:06:26.209488   21063 main.go:141] libmachine: (addons-973562)     <rng model='virtio'>
	I0815 17:06:26.209498   21063 main.go:141] libmachine: (addons-973562)       <backend model='random'>/dev/random</backend>
	I0815 17:06:26.209509   21063 main.go:141] libmachine: (addons-973562)     </rng>
	I0815 17:06:26.209518   21063 main.go:141] libmachine: (addons-973562)     
	I0815 17:06:26.209528   21063 main.go:141] libmachine: (addons-973562)     
	I0815 17:06:26.209538   21063 main.go:141] libmachine: (addons-973562)   </devices>
	I0815 17:06:26.209548   21063 main.go:141] libmachine: (addons-973562) </domain>
	I0815 17:06:26.209557   21063 main.go:141] libmachine: (addons-973562) 
	I0815 17:06:26.216618   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:e9:78:fa in network default
	I0815 17:06:26.217167   21063 main.go:141] libmachine: (addons-973562) Ensuring networks are active...
	I0815 17:06:26.217213   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:26.217766   21063 main.go:141] libmachine: (addons-973562) Ensuring network default is active
	I0815 17:06:26.218072   21063 main.go:141] libmachine: (addons-973562) Ensuring network mk-addons-973562 is active
	I0815 17:06:26.219410   21063 main.go:141] libmachine: (addons-973562) Getting domain xml...
	I0815 17:06:26.220238   21063 main.go:141] libmachine: (addons-973562) Creating domain...
	I0815 17:06:27.621615   21063 main.go:141] libmachine: (addons-973562) Waiting to get IP...
	I0815 17:06:27.622275   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:27.622613   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:27.622675   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:27.622602   21085 retry.go:31] will retry after 276.809251ms: waiting for machine to come up
	I0815 17:06:27.901064   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:27.901555   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:27.901579   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:27.901508   21085 retry.go:31] will retry after 273.714625ms: waiting for machine to come up
	I0815 17:06:28.176976   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:28.177518   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:28.177547   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:28.177467   21085 retry.go:31] will retry after 425.434844ms: waiting for machine to come up
	I0815 17:06:28.603974   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:28.604406   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:28.604428   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:28.604345   21085 retry.go:31] will retry after 416.967692ms: waiting for machine to come up
	I0815 17:06:29.022650   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:29.023041   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:29.023061   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:29.023018   21085 retry.go:31] will retry after 604.334735ms: waiting for machine to come up
	I0815 17:06:29.630084   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:29.630530   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:29.630556   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:29.630479   21085 retry.go:31] will retry after 909.637578ms: waiting for machine to come up
	I0815 17:06:30.542174   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:30.542483   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:30.542505   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:30.542453   21085 retry.go:31] will retry after 1.052124898s: waiting for machine to come up
	I0815 17:06:31.595839   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:31.596218   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:31.596245   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:31.596183   21085 retry.go:31] will retry after 1.090139908s: waiting for machine to come up
	I0815 17:06:32.688285   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:32.688699   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:32.688728   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:32.688650   21085 retry.go:31] will retry after 1.368129262s: waiting for machine to come up
	I0815 17:06:34.059099   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:34.059591   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:34.059618   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:34.059543   21085 retry.go:31] will retry after 1.880437354s: waiting for machine to come up
	I0815 17:06:35.941488   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:35.941974   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:35.941999   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:35.941929   21085 retry.go:31] will retry after 2.253065386s: waiting for machine to come up
	I0815 17:06:38.197640   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:38.198068   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:38.198086   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:38.198040   21085 retry.go:31] will retry after 2.853822719s: waiting for machine to come up
	I0815 17:06:41.053413   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:41.053943   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:41.053974   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:41.053890   21085 retry.go:31] will retry after 2.751803169s: waiting for machine to come up
	I0815 17:06:43.808783   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:43.809125   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find current IP address of domain addons-973562 in network mk-addons-973562
	I0815 17:06:43.809153   21063 main.go:141] libmachine: (addons-973562) DBG | I0815 17:06:43.809109   21085 retry.go:31] will retry after 4.993758719s: waiting for machine to come up
	I0815 17:06:48.807086   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:48.807477   21063 main.go:141] libmachine: (addons-973562) Found IP for machine: 192.168.39.200
	I0815 17:06:48.807495   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has current primary IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:48.807501   21063 main.go:141] libmachine: (addons-973562) Reserving static IP address...
	I0815 17:06:48.807868   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find host DHCP lease matching {name: "addons-973562", mac: "52:54:00:71:0b:0e", ip: "192.168.39.200"} in network mk-addons-973562
	I0815 17:06:48.875728   21063 main.go:141] libmachine: (addons-973562) DBG | Getting to WaitForSSH function...
	I0815 17:06:48.875758   21063 main.go:141] libmachine: (addons-973562) Reserved static IP address: 192.168.39.200
	I0815 17:06:48.875771   21063 main.go:141] libmachine: (addons-973562) Waiting for SSH to be available...
	I0815 17:06:48.878185   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:48.878377   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562
	I0815 17:06:48.878403   21063 main.go:141] libmachine: (addons-973562) DBG | unable to find defined IP address of network mk-addons-973562 interface with MAC address 52:54:00:71:0b:0e
	I0815 17:06:48.878582   21063 main.go:141] libmachine: (addons-973562) DBG | Using SSH client type: external
	I0815 17:06:48.878601   21063 main.go:141] libmachine: (addons-973562) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa (-rw-------)
	I0815 17:06:48.878685   21063 main.go:141] libmachine: (addons-973562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:06:48.878715   21063 main.go:141] libmachine: (addons-973562) DBG | About to run SSH command:
	I0815 17:06:48.878730   21063 main.go:141] libmachine: (addons-973562) DBG | exit 0
	I0815 17:06:48.889134   21063 main.go:141] libmachine: (addons-973562) DBG | SSH cmd err, output: exit status 255: 
	I0815 17:06:48.889163   21063 main.go:141] libmachine: (addons-973562) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0815 17:06:48.889171   21063 main.go:141] libmachine: (addons-973562) DBG | command : exit 0
	I0815 17:06:48.889179   21063 main.go:141] libmachine: (addons-973562) DBG | err     : exit status 255
	I0815 17:06:48.889223   21063 main.go:141] libmachine: (addons-973562) DBG | output  : 
	I0815 17:06:51.889905   21063 main.go:141] libmachine: (addons-973562) DBG | Getting to WaitForSSH function...
	I0815 17:06:51.892059   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:51.892507   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:51.892539   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:51.892705   21063 main.go:141] libmachine: (addons-973562) DBG | Using SSH client type: external
	I0815 17:06:51.892735   21063 main.go:141] libmachine: (addons-973562) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa (-rw-------)
	I0815 17:06:51.892765   21063 main.go:141] libmachine: (addons-973562) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.200 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:06:51.892777   21063 main.go:141] libmachine: (addons-973562) DBG | About to run SSH command:
	I0815 17:06:51.892790   21063 main.go:141] libmachine: (addons-973562) DBG | exit 0
	I0815 17:06:52.016381   21063 main.go:141] libmachine: (addons-973562) DBG | SSH cmd err, output: <nil>: 
	I0815 17:06:52.016665   21063 main.go:141] libmachine: (addons-973562) KVM machine creation complete!
	I0815 17:06:52.016952   21063 main.go:141] libmachine: (addons-973562) Calling .GetConfigRaw
	I0815 17:06:52.017450   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:52.017641   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:52.017792   21063 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 17:06:52.017807   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:06:52.018890   21063 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 17:06:52.018903   21063 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 17:06:52.018910   21063 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 17:06:52.018916   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.020950   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.021331   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.021362   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.021524   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.021690   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.021849   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.021983   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.022169   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.022404   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.022417   21063 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 17:06:52.127769   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:06:52.127789   21063 main.go:141] libmachine: Detecting the provisioner...
	I0815 17:06:52.127796   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.130399   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.130715   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.130745   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.130947   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.131241   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.131413   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.131533   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.131785   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.131943   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.131954   21063 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 17:06:52.241324   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 17:06:52.241395   21063 main.go:141] libmachine: found compatible host: buildroot
	I0815 17:06:52.241408   21063 main.go:141] libmachine: Provisioning with buildroot...
	I0815 17:06:52.241421   21063 main.go:141] libmachine: (addons-973562) Calling .GetMachineName
	I0815 17:06:52.241662   21063 buildroot.go:166] provisioning hostname "addons-973562"
	I0815 17:06:52.241688   21063 main.go:141] libmachine: (addons-973562) Calling .GetMachineName
	I0815 17:06:52.241857   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.244517   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.244863   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.244892   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.245007   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.245201   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.245347   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.245492   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.245659   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.245843   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.245856   21063 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-973562 && echo "addons-973562" | sudo tee /etc/hostname
	I0815 17:06:52.368381   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-973562
	
	I0815 17:06:52.368402   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.370731   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.371058   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.371097   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.371229   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.371392   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.371564   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.371697   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.371845   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.372011   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.372032   21063 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-973562' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-973562/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-973562' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:06:52.490787   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:06:52.490817   21063 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:06:52.490861   21063 buildroot.go:174] setting up certificates
	I0815 17:06:52.490874   21063 provision.go:84] configureAuth start
	I0815 17:06:52.490886   21063 main.go:141] libmachine: (addons-973562) Calling .GetMachineName
	I0815 17:06:52.491131   21063 main.go:141] libmachine: (addons-973562) Calling .GetIP
	I0815 17:06:52.493378   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.493682   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.493709   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.493870   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.495814   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.496141   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.496167   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.496260   21063 provision.go:143] copyHostCerts
	I0815 17:06:52.496333   21063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:06:52.496465   21063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:06:52.496561   21063 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:06:52.496630   21063 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.addons-973562 san=[127.0.0.1 192.168.39.200 addons-973562 localhost minikube]
	I0815 17:06:52.582245   21063 provision.go:177] copyRemoteCerts
	I0815 17:06:52.582303   21063 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:06:52.582323   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.585055   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.585398   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.585426   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.585594   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.585769   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.585923   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.586079   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:06:52.672532   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:06:52.698488   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:06:52.723546   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 17:06:52.746433   21063 provision.go:87] duration metric: took 255.546254ms to configureAuth
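
configureAuth above generates a server certificate for the guest and then pushes ca.pem, server.pem and server-key.pem into /etc/docker over SSH (the three scp lines). A minimal Go sketch of that copy step, assuming a plain scp binary and placeholder target/key paths rather than minikube's internal ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // copyRemoteCerts copies the host-side certificates to the guest, mirroring
    // the three scp transfers in the log. sshTarget and keyPath are placeholders.
    func copyRemoteCerts(sshTarget, keyPath string) error {
        files := map[string]string{
            "ca.pem":         "/etc/docker/ca.pem",
            "server.pem":     "/etc/docker/server.pem",
            "server-key.pem": "/etc/docker/server-key.pem",
        }
        for local, remote := range files {
            if _, err := os.Stat(local); err != nil {
                return fmt.Errorf("missing local cert %s: %w", local, err)
            }
            cmd := exec.Command("scp", "-i", keyPath, local, sshTarget+":"+remote)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("scp %s failed: %v: %s", local, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(copyRemoteCerts("docker@192.168.39.200", "/path/to/id_rsa"))
    }
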
	I0815 17:06:52.746474   21063 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:06:52.746699   21063 config.go:182] Loaded profile config "addons-973562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:06:52.746775   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:52.749226   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.749539   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:52.749571   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:52.749750   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:52.749917   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.750072   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:52.750235   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:52.750379   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:52.750598   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:52.750619   21063 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:06:53.010465   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:06:53.010500   21063 main.go:141] libmachine: Checking connection to Docker...
	I0815 17:06:53.010511   21063 main.go:141] libmachine: (addons-973562) Calling .GetURL
	I0815 17:06:53.011924   21063 main.go:141] libmachine: (addons-973562) DBG | Using libvirt version 6000000
	I0815 17:06:53.013830   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.014152   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.014180   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.014291   21063 main.go:141] libmachine: Docker is up and running!
	I0815 17:06:53.014306   21063 main.go:141] libmachine: Reticulating splines...
	I0815 17:06:53.014314   21063 client.go:171] duration metric: took 27.632024015s to LocalClient.Create
	I0815 17:06:53.014341   21063 start.go:167] duration metric: took 27.632078412s to libmachine.API.Create "addons-973562"
	I0815 17:06:53.014357   21063 start.go:293] postStartSetup for "addons-973562" (driver="kvm2")
	I0815 17:06:53.014372   21063 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:06:53.014392   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.014616   21063 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:06:53.014638   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:53.016567   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.016877   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.016905   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.017056   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:53.017222   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.017373   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:53.017503   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:06:53.098968   21063 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:06:53.103157   21063 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:06:53.103183   21063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:06:53.103263   21063 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:06:53.103293   21063 start.go:296] duration metric: took 88.925638ms for postStartSetup
	I0815 17:06:53.103329   21063 main.go:141] libmachine: (addons-973562) Calling .GetConfigRaw
	I0815 17:06:53.103874   21063 main.go:141] libmachine: (addons-973562) Calling .GetIP
	I0815 17:06:53.106235   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.106574   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.106607   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.106839   21063 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/config.json ...
	I0815 17:06:53.107053   21063 start.go:128] duration metric: took 27.742142026s to createHost
	I0815 17:06:53.107086   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:53.109206   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.109503   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.109530   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.109639   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:53.109797   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.109950   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.110046   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:53.110192   21063 main.go:141] libmachine: Using SSH client type: native
	I0815 17:06:53.110370   21063 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.200 22 <nil> <nil>}
	I0815 17:06:53.110381   21063 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:06:53.217031   21063 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723741613.197041360
	
	I0815 17:06:53.217057   21063 fix.go:216] guest clock: 1723741613.197041360
	I0815 17:06:53.217067   21063 fix.go:229] Guest: 2024-08-15 17:06:53.19704136 +0000 UTC Remote: 2024-08-15 17:06:53.10706892 +0000 UTC m=+27.845466349 (delta=89.97244ms)
	I0815 17:06:53.217091   21063 fix.go:200] guest clock delta is within tolerance: 89.97244ms
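
The guest clock check parses the output of date +%s.%N and compares it with the host clock; the roughly 90ms delta here is inside the tolerance, so no adjustment is made. A rough Go sketch of that comparison, with an assumed tolerance value since the real threshold is not shown in the log:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest clock is close enough to the
    // host clock to skip fixing. maxDelta is an assumed value for illustration.
    func withinTolerance(guest, host time.Time, maxDelta time.Duration) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= maxDelta
    }

    func main() {
        guest := time.Unix(1723741613, 197041360) // parsed from `date +%s.%N`
        host := time.Now()
        fmt.Println("delta ok:", withinTolerance(guest, host, 2*time.Second))
    }
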
	I0815 17:06:53.217099   21063 start.go:83] releasing machines lock for "addons-973562", held for 27.852271909s
	I0815 17:06:53.217123   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.217381   21063 main.go:141] libmachine: (addons-973562) Calling .GetIP
	I0815 17:06:53.219809   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.220126   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.220150   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.220293   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.220778   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.220940   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:06:53.221015   21063 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:06:53.221061   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:53.221171   21063 ssh_runner.go:195] Run: cat /version.json
	I0815 17:06:53.221191   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:06:53.223835   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.223924   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.224160   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.224185   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.224217   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:53.224237   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:53.224303   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:53.224517   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:06:53.224540   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.224706   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:53.224739   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:06:53.224874   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:06:53.224939   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:06:53.225081   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:06:53.327198   21063 ssh_runner.go:195] Run: systemctl --version
	I0815 17:06:53.333352   21063 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:06:53.493783   21063 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:06:53.499868   21063 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:06:53.499943   21063 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:06:53.515938   21063 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
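
Any pre-existing bridge or podman CNI configs are renamed with a .mk_disabled suffix so the bridge CNI that minikube writes later is the only active one; that is what the find/mv one-liner above does. A Go sketch of the same renaming pass, illustrative rather than minikube's actual cni.go:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeConfigs renames bridge/podman CNI configs so the runtime
    // ignores them, mirroring `find ... -exec mv {} {}.mk_disabled` in the log.
    func disableBridgeConfigs(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return disabled, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        out, err := disableBridgeConfigs("/etc/cni/net.d")
        fmt.Println(out, err)
    }
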
	I0815 17:06:53.515961   21063 start.go:495] detecting cgroup driver to use...
	I0815 17:06:53.516020   21063 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:06:53.530930   21063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:06:53.544880   21063 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:06:53.544944   21063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:06:53.558070   21063 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:06:53.571022   21063 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:06:53.679728   21063 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:06:53.828460   21063 docker.go:233] disabling docker service ...
	I0815 17:06:53.828542   21063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:06:53.843608   21063 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:06:53.855704   21063 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:06:53.999429   21063 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:06:54.127017   21063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
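
With CRI-O chosen as the runtime, the cri-docker and docker units are stopped, disabled and masked so they cannot claim the CRI socket on the next boot. A compact Go sketch of that systemctl sequence; the helper name is made up for illustration:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // quietService runs the systemctl steps the log shows for one runtime:
    // stop the socket and service, disable the socket, mask the service.
    // Errors from the stop steps are tolerated, since the unit may not be running.
    func quietService(socket, service string) error {
        steps := [][]string{
            {"systemctl", "stop", "-f", socket},
            {"systemctl", "stop", "-f", service},
            {"systemctl", "disable", socket},
            {"systemctl", "mask", service},
        }
        for i, args := range steps {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil && i >= 2 {
                return fmt.Errorf("%v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(quietService("cri-docker.socket", "cri-docker.service"))
        fmt.Println(quietService("docker.socket", "docker.service"))
    }
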
	I0815 17:06:54.140531   21063 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:06:54.157960   21063 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:06:54.158016   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.167667   21063 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:06:54.167721   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.177591   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.187666   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.197324   21063 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:06:54.207319   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.217036   21063 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.233456   21063 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:06:54.243417   21063 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:06:54.252476   21063 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 17:06:54.252554   21063 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 17:06:54.264858   21063 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
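
The sysctl probe for net.bridge.bridge-nf-call-iptables fails with status 255 because br_netfilter is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding. A small Go sketch of that try-then-fallback, shelling out to the same commands (assumed to run as root on the guest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureNetfilter mirrors the fallback in the log: probe the sysctl first,
    // load br_netfilter if the probe fails, then turn on ip_forward.
    func ensureNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return exec.Command("sudo", "sh", "-c",
            "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() { fmt.Println(ensureNetfilter()) }
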
	I0815 17:06:54.274225   21063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:06:54.396433   21063 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:06:54.532868   21063 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:06:54.532971   21063 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:06:54.537631   21063 start.go:563] Will wait 60s for crictl version
	I0815 17:06:54.537703   21063 ssh_runner.go:195] Run: which crictl
	I0815 17:06:54.541277   21063 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:06:54.580399   21063 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:06:54.580528   21063 ssh_runner.go:195] Run: crio --version
	I0815 17:06:54.608318   21063 ssh_runner.go:195] Run: crio --version
	I0815 17:06:54.638666   21063 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:06:54.639920   21063 main.go:141] libmachine: (addons-973562) Calling .GetIP
	I0815 17:06:54.642151   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:54.642461   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:06:54.642481   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:06:54.642800   21063 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:06:54.646908   21063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
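
host.minikube.internal is added to the guest's /etc/hosts by filtering out any stale entry and appending the gateway IP, which is what the bash pipeline above does. An equivalent Go sketch that rewrites the file in place (hostname and IP taken from the log; the function itself is illustrative):

    package main

    import (
        "os"
        "strings"
    )

    // addHostAlias drops any existing line for the alias and appends "ip\talias",
    // matching the grep/echo pipeline in the log.
    func addHostAlias(path, ip, alias string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+alias) {
                continue // stale entry, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+alias)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = addHostAlias("/etc/hosts", "192.168.39.1", "host.minikube.internal")
    }
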
	I0815 17:06:54.658950   21063 kubeadm.go:883] updating cluster {Name:addons-973562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:06:54.659048   21063 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:06:54.659090   21063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:06:54.691040   21063 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 17:06:54.691101   21063 ssh_runner.go:195] Run: which lz4
	I0815 17:06:54.695285   21063 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 17:06:54.699359   21063 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 17:06:54.699381   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 17:06:55.937481   21063 crio.go:462] duration metric: took 1.242223137s to copy over tarball
	I0815 17:06:55.937548   21063 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 17:06:58.041515   21063 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.103934922s)
	I0815 17:06:58.041556   21063 crio.go:469] duration metric: took 2.104046807s to extract the tarball
	I0815 17:06:58.041567   21063 ssh_runner.go:146] rm: /preloaded.tar.lz4
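
Because no preloaded image tarball exists in the guest yet, the roughly 389 MB cache tarball is copied in, extracted over /var with lz4, and then deleted, after which crictl reports all images as preloaded. A sketch of that check-copy-extract-clean flow in Go; the local cp stands in for the SSH copy minikube actually performs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    const preload = "/preloaded.tar.lz4"

    // extractPreload copies the cached tarball in if missing, untars it over /var
    // with lz4, and removes the tarball afterwards, as the log sequence shows.
    func extractPreload(cached string) error {
        if _, err := os.Stat(preload); err != nil {
            if err := exec.Command("sudo", "cp", cached, preload).Run(); err != nil {
                return fmt.Errorf("copy preload: %w", err)
            }
        }
        if err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", preload).Run(); err != nil {
            return fmt.Errorf("extract preload: %w", err)
        }
        return exec.Command("sudo", "rm", "-f", preload).Run()
    }

    func main() {
        err := extractPreload("preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4")
        fmt.Println(err)
    }
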
	I0815 17:06:58.078406   21063 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:06:58.119965   21063 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:06:58.119986   21063 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:06:58.119995   21063 kubeadm.go:934] updating node { 192.168.39.200 8443 v1.31.0 crio true true} ...
	I0815 17:06:58.120117   21063 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-973562 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.200
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:06:58.120205   21063 ssh_runner.go:195] Run: crio config
	I0815 17:06:58.166976   21063 cni.go:84] Creating CNI manager for ""
	I0815 17:06:58.166994   21063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 17:06:58.167003   21063 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:06:58.167022   21063 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.200 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-973562 NodeName:addons-973562 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.200"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.200 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:06:58.167168   21063 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.200
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-973562"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.200
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.200"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 17:06:58.167242   21063 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:06:58.177137   21063 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:06:58.177198   21063 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 17:06:58.186673   21063 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 17:06:58.202335   21063 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:06:58.217882   21063 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0815 17:06:58.234161   21063 ssh_runner.go:195] Run: grep 192.168.39.200	control-plane.minikube.internal$ /etc/hosts
	I0815 17:06:58.237763   21063 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.200	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:06:58.249671   21063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:06:58.355061   21063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:06:58.370643   21063 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562 for IP: 192.168.39.200
	I0815 17:06:58.370667   21063 certs.go:194] generating shared ca certs ...
	I0815 17:06:58.370685   21063 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.370823   21063 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:06:58.566505   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt ...
	I0815 17:06:58.566532   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt: {Name:mk7b3c266988c3bf447b0d5846e34249420d4046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.566712   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key ...
	I0815 17:06:58.566725   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key: {Name:mk989a7f98c08ab9bacc7aac0e5b4671d9feab8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.566822   21063 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:06:58.663712   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt ...
	I0815 17:06:58.663739   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt: {Name:mk107ae151027de9139f76d73fd7a7d8b4333fa2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.663898   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key ...
	I0815 17:06:58.663910   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key: {Name:mkbad363b34cde2c9295a09e950bde4265a6d910 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.664006   21063 certs.go:256] generating profile certs ...
	I0815 17:06:58.664058   21063 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.key
	I0815 17:06:58.664074   21063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt with IP's: []
	I0815 17:06:58.768925   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt ...
	I0815 17:06:58.768953   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: {Name:mk028cecbbd4c3c93083dc96c7b6732f9f2b764d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.769113   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.key ...
	I0815 17:06:58.769130   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.key: {Name:mk8eca70504a08a964c72dbf724341e25251229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:58.769223   21063 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key.a68f4c30
	I0815 17:06:58.769248   21063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt.a68f4c30 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.200]
	I0815 17:06:59.250823   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt.a68f4c30 ...
	I0815 17:06:59.250851   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt.a68f4c30: {Name:mk452cee6d844e8db9f303b800b52ce910162df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:59.250997   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key.a68f4c30 ...
	I0815 17:06:59.251010   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key.a68f4c30: {Name:mk5364c42f311a455bf4483779b819cd363dcebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:59.251079   21063 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt.a68f4c30 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt
	I0815 17:06:59.251153   21063 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key.a68f4c30 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key
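
The apiserver serving certificate is issued by the minikube CA with IP SANs for the in-cluster service IP 10.96.0.1, loopback, 10.0.0.1 and the node IP 192.168.39.200, then copied to its canonical apiserver.crt/apiserver.key names. A condensed crypto/x509 sketch of issuing such a cert; the self-signed CA in main stands in for ca.crt/ca.key and error handling is elided for brevity, so this is not minikube's actual crypto.go:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Self-signed CA standing in for minikubeCA (ca.crt / ca.key).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the IP SANs seen in the log.
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.200"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        fmt.Println(len(der), err)
    }
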
	I0815 17:06:59.251198   21063 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.key
	I0815 17:06:59.251216   21063 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.crt with IP's: []
	I0815 17:06:59.466006   21063 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.crt ...
	I0815 17:06:59.466035   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.crt: {Name:mkdb18e9115569cc98aa6c1385fdd768e627bc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:59.466183   21063 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.key ...
	I0815 17:06:59.466193   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.key: {Name:mk9a467ec3acb971b0b82158cf4e08112b45e20f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:59.466350   21063 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:06:59.466383   21063 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:06:59.466406   21063 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:06:59.466429   21063 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:06:59.467006   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:06:59.496035   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:06:59.519242   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:06:59.541831   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:06:59.564643   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 17:06:59.586785   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:06:59.608298   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:06:59.630044   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:06:59.652196   21063 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:06:59.674551   21063 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:06:59.690517   21063 ssh_runner.go:195] Run: openssl version
	I0815 17:06:59.696100   21063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:06:59.706870   21063 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:06:59.711145   21063 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:06:59.711200   21063 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:06:59.716928   21063 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
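
minikubeCA.pem is then made trusted system-wide by linking it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 here). A Go sketch of those two steps, shelling out to openssl exactly as the log commands do:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // trustCert computes the cert's subject hash with openssl and symlinks the
    // PEM into /etc/ssl/certs/<hash>.0, matching the two commands in the log.
    func trustCert(pem string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return fmt.Errorf("openssl hash: %w", err)
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // ln -fs semantics: replace any existing link
        return os.Symlink(pem, link)
    }

    func main() {
        fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }
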
	I0815 17:06:59.727695   21063 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:06:59.731500   21063 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:06:59.731544   21063 kubeadm.go:392] StartCluster: {Name:addons-973562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-973562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:06:59.731607   21063 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:06:59.731642   21063 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:06:59.771186   21063 cri.go:89] found id: ""
	I0815 17:06:59.771261   21063 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:06:59.781486   21063 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:06:59.791032   21063 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:06:59.800272   21063 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:06:59.800289   21063 kubeadm.go:157] found existing configuration files:
	
	I0815 17:06:59.800334   21063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 17:06:59.809341   21063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:06:59.809404   21063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:06:59.818541   21063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 17:06:59.827474   21063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:06:59.827527   21063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:06:59.836888   21063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 17:06:59.845822   21063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:06:59.845863   21063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:06:59.855219   21063 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 17:06:59.864182   21063 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:06:59.864228   21063 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
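
Before kubeadm init, each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails; on this first start all four greps fail because the files do not exist, so the rm calls are no-ops. A Go sketch of that cleanup loop (file list and endpoint taken from the log; the helper is illustrative):

    package main

    import (
        "os"
        "strings"
    )

    // cleanupStaleConfigs removes kubeconfigs that do not point at the expected
    // control-plane endpoint, mirroring the grep/rm sequence in the log.
    func cleanupStaleConfigs(endpoint string) {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                _ = os.Remove(f) // missing or stale: remove (no-op on first start)
            }
        }
    }

    func main() {
        cleanupStaleConfigs("https://control-plane.minikube.internal:8443")
    }
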
	I0815 17:06:59.873365   21063 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 17:06:59.923497   21063 kubeadm.go:310] W0815 17:06:59.909291     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:06:59.924454   21063 kubeadm.go:310] W0815 17:06:59.910526     837 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:07:00.041221   21063 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 17:07:09.465409   21063 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 17:07:09.465482   21063 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:07:09.465578   21063 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:07:09.465678   21063 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:07:09.465812   21063 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:07:09.465902   21063 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:07:09.467564   21063 out.go:235]   - Generating certificates and keys ...
	I0815 17:07:09.467652   21063 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:07:09.467726   21063 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:07:09.467817   21063 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 17:07:09.467904   21063 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 17:07:09.467967   21063 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 17:07:09.468009   21063 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 17:07:09.468061   21063 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 17:07:09.468225   21063 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-973562 localhost] and IPs [192.168.39.200 127.0.0.1 ::1]
	I0815 17:07:09.468312   21063 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 17:07:09.468460   21063 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-973562 localhost] and IPs [192.168.39.200 127.0.0.1 ::1]
	I0815 17:07:09.468548   21063 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 17:07:09.468625   21063 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 17:07:09.468697   21063 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 17:07:09.468780   21063 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:07:09.468837   21063 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:07:09.468885   21063 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 17:07:09.468932   21063 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:07:09.468987   21063 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:07:09.469032   21063 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:07:09.469111   21063 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:07:09.469192   21063 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:07:09.470679   21063 out.go:235]   - Booting up control plane ...
	I0815 17:07:09.470756   21063 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:07:09.470869   21063 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:07:09.470956   21063 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:07:09.471093   21063 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:07:09.471216   21063 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:07:09.471282   21063 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:07:09.471427   21063 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 17:07:09.471519   21063 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 17:07:09.471603   21063 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.923908ms
	I0815 17:07:09.471710   21063 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 17:07:09.471790   21063 kubeadm.go:310] [api-check] The API server is healthy after 5.001999725s
	I0815 17:07:09.471911   21063 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:07:09.472090   21063 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:07:09.472169   21063 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:07:09.472400   21063 kubeadm.go:310] [mark-control-plane] Marking the node addons-973562 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:07:09.472494   21063 kubeadm.go:310] [bootstrap-token] Using token: u6ujye.vut6y5k8jcesrskl
	I0815 17:07:09.474028   21063 out.go:235]   - Configuring RBAC rules ...
	I0815 17:07:09.474138   21063 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:07:09.474245   21063 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:07:09.474415   21063 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:07:09.474565   21063 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:07:09.474728   21063 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:07:09.474830   21063 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:07:09.474989   21063 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:07:09.475053   21063 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:07:09.475119   21063 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:07:09.475128   21063 kubeadm.go:310] 
	I0815 17:07:09.475213   21063 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:07:09.475222   21063 kubeadm.go:310] 
	I0815 17:07:09.475333   21063 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:07:09.475347   21063 kubeadm.go:310] 
	I0815 17:07:09.475392   21063 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:07:09.475474   21063 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:07:09.475549   21063 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:07:09.475556   21063 kubeadm.go:310] 
	I0815 17:07:09.475623   21063 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:07:09.475633   21063 kubeadm.go:310] 
	I0815 17:07:09.475676   21063 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:07:09.475682   21063 kubeadm.go:310] 
	I0815 17:07:09.475740   21063 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:07:09.475824   21063 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:07:09.475883   21063 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:07:09.475889   21063 kubeadm.go:310] 
	I0815 17:07:09.475966   21063 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:07:09.476049   21063 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:07:09.476056   21063 kubeadm.go:310] 
	I0815 17:07:09.476120   21063 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token u6ujye.vut6y5k8jcesrskl \
	I0815 17:07:09.476224   21063 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 17:07:09.476243   21063 kubeadm.go:310] 	--control-plane 
	I0815 17:07:09.476249   21063 kubeadm.go:310] 
	I0815 17:07:09.476311   21063 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:07:09.476316   21063 kubeadm.go:310] 
	I0815 17:07:09.476390   21063 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token u6ujye.vut6y5k8jcesrskl \
	I0815 17:07:09.476497   21063 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 17:07:09.476514   21063 cni.go:84] Creating CNI manager for ""
	I0815 17:07:09.476524   21063 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 17:07:09.477944   21063 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 17:07:09.479065   21063 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 17:07:09.490273   21063 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
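	(The 496-byte conflist itself is not echoed in the log. As a rough sketch only — the network name, CNI version, and pod subnet below are assumptions, not values taken from this run — a bridge conflist of this kind generally looks like:

	# Illustrative sketch, not the exact file minikube wrote in this run.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
	)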
	I0815 17:07:09.509311   21063 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:07:09.509370   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:09.509380   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-973562 minikube.k8s.io/updated_at=2024_08_15T17_07_09_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=addons-973562 minikube.k8s.io/primary=true
	I0815 17:07:09.671048   21063 ops.go:34] apiserver oom_adj: -16
	I0815 17:07:09.671188   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:10.172141   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:10.671946   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:11.172027   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:11.672171   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:12.171202   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:12.671775   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:13.171846   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:13.671576   21063 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:07:13.787398   21063 kubeadm.go:1113] duration metric: took 4.278072461s to wait for elevateKubeSystemPrivileges
	I0815 17:07:13.787440   21063 kubeadm.go:394] duration metric: took 14.055899581s to StartCluster
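	(The repeated "kubectl get sa default" runs above appear to be minikube polling until the "default" ServiceAccount exists as part of elevateKubeSystemPrivileges; a rough shell equivalent, illustrative only and not a command from this run:

	# Illustrative only: poll for the default ServiceAccount the same way the retries above do.
	until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
	)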
	I0815 17:07:13.787463   21063 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:07:13.787606   21063 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:07:13.788173   21063 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:07:13.788392   21063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 17:07:13.788430   21063 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.200 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:07:13.788508   21063 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
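	(The toEnable map above records the requested state of every addon for this profile; the same toggles can be made from the minikube CLI — illustrative commands, not executed in this log:

	# Illustrative only — not commands from this run.
	minikube addons enable metrics-server -p addons-973562
	minikube addons disable volcano -p addons-973562
	)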
	I0815 17:07:13.788601   21063 addons.go:69] Setting yakd=true in profile "addons-973562"
	I0815 17:07:13.788622   21063 addons.go:69] Setting gcp-auth=true in profile "addons-973562"
	I0815 17:07:13.788637   21063 addons.go:234] Setting addon yakd=true in "addons-973562"
	I0815 17:07:13.788692   21063 mustload.go:65] Loading cluster: addons-973562
	I0815 17:07:13.788707   21063 addons.go:69] Setting default-storageclass=true in profile "addons-973562"
	I0815 17:07:13.788708   21063 config.go:182] Loaded profile config "addons-973562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:07:13.788693   21063 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-973562"
	I0815 17:07:13.788959   21063 addons.go:69] Setting helm-tiller=true in profile "addons-973562"
	I0815 17:07:13.789030   21063 addons.go:234] Setting addon helm-tiller=true in "addons-973562"
	I0815 17:07:13.788909   21063 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-973562"
	I0815 17:07:13.789063   21063 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-973562"
	I0815 17:07:13.789072   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789103   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789098   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789124   21063 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-973562"
	I0815 17:07:13.789063   21063 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-973562"
	I0815 17:07:13.789220   21063 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-973562"
	I0815 17:07:13.789230   21063 addons.go:69] Setting ingress=true in profile "addons-973562"
	I0815 17:07:13.789234   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789237   21063 addons.go:69] Setting cloud-spanner=true in profile "addons-973562"
	I0815 17:07:13.789244   21063 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-973562"
	I0815 17:07:13.789253   21063 addons.go:69] Setting ingress-dns=true in profile "addons-973562"
	I0815 17:07:13.789271   21063 addons.go:234] Setting addon cloud-spanner=true in "addons-973562"
	I0815 17:07:13.789277   21063 addons.go:234] Setting addon ingress-dns=true in "addons-973562"
	I0815 17:07:13.789296   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789308   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789435   21063 addons.go:69] Setting registry=true in profile "addons-973562"
	I0815 17:07:13.789487   21063 addons.go:234] Setting addon registry=true in "addons-973562"
	I0815 17:07:13.789517   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789694   21063 addons.go:69] Setting storage-provisioner=true in profile "addons-973562"
	I0815 17:07:13.789721   21063 addons.go:234] Setting addon storage-provisioner=true in "addons-973562"
	I0815 17:07:13.789729   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.789742   21063 addons.go:69] Setting inspektor-gadget=true in profile "addons-973562"
	I0815 17:07:13.789751   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789756   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.789772   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.789778   21063 addons.go:234] Setting addon inspektor-gadget=true in "addons-973562"
	I0815 17:07:13.789794   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.789803   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.789806   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789874   21063 addons.go:69] Setting volumesnapshots=true in profile "addons-973562"
	I0815 17:07:13.789917   21063 addons.go:234] Setting addon volumesnapshots=true in "addons-973562"
	I0815 17:07:13.789951   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.789997   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790044   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790164   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790191   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790260   21063 addons.go:69] Setting volcano=true in profile "addons-973562"
	I0815 17:07:13.790283   21063 addons.go:69] Setting metrics-server=true in profile "addons-973562"
	I0815 17:07:13.790326   21063 addons.go:234] Setting addon volcano=true in "addons-973562"
	I0815 17:07:13.790361   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.790366   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790413   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790531   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790568   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790264   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790646   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790749   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790799   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790876   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790909   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.790913   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.790969   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.789760   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.791154   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.791217   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.791421   21063 addons.go:234] Setting addon ingress=true in "addons-973562"
	I0815 17:07:13.791679   21063 config.go:182] Loaded profile config "addons-973562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:07:13.791847   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.802121   21063 out.go:177] * Verifying Kubernetes components...
	I0815 17:07:13.802591   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.802689   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.808432   21063 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:07:13.790329   21063 addons.go:234] Setting addon metrics-server=true in "addons-973562"
	I0815 17:07:13.808855   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.809400   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.809459   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.812179   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33667
	I0815 17:07:13.790885   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.812303   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.812441   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35447
	I0815 17:07:13.812881   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.813170   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.813529   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.813565   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.813681   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.813701   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.814016   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.814038   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.814706   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.814719   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.814754   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.814760   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.814989   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0815 17:07:13.821506   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.821553   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.821885   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.828448   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34739
	I0815 17:07:13.828624   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33369
	I0815 17:07:13.829811   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.829894   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.836898   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.837093   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45427
	I0815 17:07:13.837215   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36885
	I0815 17:07:13.837388   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.837567   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.838142   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.838186   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.839157   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.839369   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.839389   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.839444   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.839474   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.839764   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.839849   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.839925   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.840117   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.840137   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.840207   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.840457   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40007
	I0815 17:07:13.840640   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.840805   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.841457   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.841496   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.845332   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.845496   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.845518   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.845886   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.845924   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.846837   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.847494   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.847531   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.847766   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.848332   21063 addons.go:234] Setting addon default-storageclass=true in "addons-973562"
	I0815 17:07:13.848380   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.848442   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.848477   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.848950   21063 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-973562"
	I0815 17:07:13.848984   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.849545   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.854777   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39877
	I0815 17:07:13.855279   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.859114   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40281
	I0815 17:07:13.860518   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I0815 17:07:13.861061   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.861199   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.861322   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.861272   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.861467   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.861571   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.861584   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.861922   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.862484   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.862519   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.862918   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.862940   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.863305   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.863842   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.863877   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.864460   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.864476   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.864898   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.865946   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.865988   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.874875   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42515
	I0815 17:07:13.875131   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38813
	I0815 17:07:13.875624   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.875730   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.876307   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.876328   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.876435   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.876456   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.876722   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.876778   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.877292   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.877329   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.877549   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33081
	I0815 17:07:13.879745   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44623
	I0815 17:07:13.879747   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.879828   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.880084   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.880602   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.880617   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.880683   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35583
	I0815 17:07:13.881003   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.881116   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.881569   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.881586   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.881637   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.881644   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.881669   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.882153   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.882316   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.882539   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.882562   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.883586   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.884131   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.884164   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.884374   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.884429   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I0815 17:07:13.884437   21063 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 17:07:13.884818   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.885255   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.885280   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.885581   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.885760   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.885920   21063 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 17:07:13.885939   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 17:07:13.885954   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.886038   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 17:07:13.886911   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33027
	I0815 17:07:13.887294   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 17:07:13.887312   21063 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 17:07:13.887331   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.888018   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.888264   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:13.888276   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:13.890081   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:13.890090   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.890122   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:13.890134   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:13.890143   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:13.890156   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:13.890562   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:13.890580   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:13.890596   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:13.890604   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36935
	W0815 17:07:13.890693   21063 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0815 17:07:13.890962   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.890990   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.891160   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.891339   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.891487   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.891540   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.891657   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.891950   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.891968   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.892100   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.892288   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.892442   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.892573   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.897696   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33513
	I0815 17:07:13.898242   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.898799   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.898819   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.899184   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.899380   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.901052   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.902476   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.903035   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35117
	I0815 17:07:13.903220   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.903284   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.903299   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.903825   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.903885   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.903961   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.903980   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.904188   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.904505   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.904527   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.904975   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.905193   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.905263   21063 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 17:07:13.906331   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39037
	I0815 17:07:13.906477   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.906606   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.906754   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.906840   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.906915   21063 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 17:07:13.906926   21063 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 17:07:13.906943   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.907302   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.907316   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.908026   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.908169   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 17:07:13.908353   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.909150   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.909233   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.909276   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.909559   21063 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 17:07:13.910792   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 17:07:13.911467   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.910803   21063 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 17:07:13.911935   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.911972   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.912152   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.912335   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.912469   21063 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:07:13.912520   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.912666   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.913719   21063 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 17:07:13.913894   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39973
	I0815 17:07:13.913925   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42195
	I0815 17:07:13.914464   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 17:07:13.914553   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.914639   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.915157   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.915174   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.915319   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.915332   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.915709   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.915769   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.915822   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0815 17:07:13.915916   21063 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 17:07:13.915927   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 17:07:13.915939   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.915971   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.916154   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.916278   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.916340   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32879
	I0815 17:07:13.916570   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.916582   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.916797   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.916889   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.917503   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.917532   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.918098   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.918114   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.918390   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44963
	I0815 17:07:13.918477   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.918555   21063 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:07:13.918790   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.918895   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.919602   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 17:07:13.919739   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.919954   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.919970   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.920269   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.920284   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.920303   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.920467   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.920556   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.920653   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.920686   21063 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:07:13.920710   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 17:07:13.920730   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.920998   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.921035   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.921250   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.921388   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.921661   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.922245   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:13.922571   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 17:07:13.922588   21063 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:07:13.922627   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:13.922644   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:13.923869   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.923937   21063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:07:13.923963   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:07:13.923979   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.923939   21063 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 17:07:13.924795   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.925088   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.925600   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 17:07:13.925606   21063 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 17:07:13.925624   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.925626   21063 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 17:07:13.925647   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.925785   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.925908   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.926019   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.927959   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 17:07:13.928387   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.928585   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.928617   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.928629   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.928848   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.929039   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.929155   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.929998   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.930353   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.930382   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.930452   21063 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 17:07:13.930522   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.930662   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.930773   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.930897   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.931743   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 17:07:13.931767   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 17:07:13.931785   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.933944   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0815 17:07:13.934253   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.934678   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.934693   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.934840   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.934982   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.935136   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.935212   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.935231   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.935247   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46129
	I0815 17:07:13.935492   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.935683   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.935902   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.936028   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.936345   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.936748   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.936894   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.936913   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.937486   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.937710   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.938647   21063 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0815 17:07:13.939121   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.940069   21063 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0815 17:07:13.940089   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0815 17:07:13.940106   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.940800   21063 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 17:07:13.942039   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 17:07:13.942056   21063 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 17:07:13.942074   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.943882   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.944413   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.944433   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.944678   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.944853   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.945002   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.945136   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.945394   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41089
	I0815 17:07:13.945676   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.946360   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.946376   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.947077   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.947121   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.947301   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.947582   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.947604   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.947785   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.947967   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.948129   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.948282   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.949210   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.949804   21063 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:07:13.949822   21063 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:07:13.949837   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.951180   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0815 17:07:13.951505   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36233
	I0815 17:07:13.951659   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.952116   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.952136   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.952200   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.952665   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.952686   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.952748   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.952878   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.952935   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.952963   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.953237   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.953309   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.953329   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.953497   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.953728   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.953880   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.954017   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.954266   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37045
	I0815 17:07:13.954649   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.954729   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.955101   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.955494   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.955516   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.955864   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.956039   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.956703   21063 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 17:07:13.956705   21063 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 17:07:13.957905   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42355
	I0815 17:07:13.958039   21063 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:07:13.958056   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 17:07:13.958073   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.958099   21063 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:07:13.958112   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 17:07:13.958128   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.958212   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:13.959207   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:13.959225   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:13.960570   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:13.960936   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:13.961674   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.961679   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.962212   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.962232   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.962235   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.962249   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.962310   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.962314   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.962478   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.962499   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.962711   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.962744   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.962902   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.962939   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:13.962903   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:13.964630   21063 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0815 17:07:13.965638   21063 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47196->192.168.39.200:22: read: connection reset by peer
	I0815 17:07:13.965664   21063 retry.go:31] will retry after 142.710304ms: ssh: handshake failed: read tcp 192.168.39.1:47196->192.168.39.200:22: read: connection reset by peer
	I0815 17:07:13.967200   21063 out.go:177]   - Using image docker.io/busybox:stable
	I0815 17:07:13.968604   21063 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:07:13.968618   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 17:07:13.968631   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:13.973812   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:13.973812   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.973870   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:13.973883   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:13.974034   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:13.974194   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:13.974329   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	W0815 17:07:14.110646   21063 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:47208->192.168.39.200:22: read: connection reset by peer
	I0815 17:07:14.110674   21063 retry.go:31] will retry after 445.724768ms: ssh: handshake failed: read tcp 192.168.39.1:47208->192.168.39.200:22: read: connection reset by peer
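	The two "dial failure (will retry)" warnings above show the handshake-reset-and-retry pattern used when the guest's sshd is not yet accepting connections: the dial fails with "connection reset by peer", a short delay is logged, and the connection is attempted again. Below is a minimal Go sketch of that idea, assuming golang.org/x/crypto/ssh; the key path, address, and retry budget are placeholders and this is not minikube's actual sshutil/retry code.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// dialWithRetry retries the SSH handshake a few times, sleeping between
	// attempts, which papers over "connection reset by peer" while sshd starts.
	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int, wait time.Duration) (*ssh.Client, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err
			fmt.Printf("dial failure (will retry after %v): %v\n", wait, err)
			time.Sleep(wait)
		}
		return nil, fmt.Errorf("ssh dial failed after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		key, err := os.ReadFile("/path/to/machines/addons-973562/id_rsa") // placeholder key path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
			Timeout:         10 * time.Second,
		}
		client, err := dialWithRetry("192.168.39.200:22", cfg, 3, 150*time.Millisecond)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	}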
	I0815 17:07:14.262372   21063 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:07:14.262543   21063 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 17:07:14.375129   21063 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 17:07:14.375158   21063 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 17:07:14.384985   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 17:07:14.385011   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 17:07:14.412926   21063 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 17:07:14.412948   21063 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 17:07:14.428757   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:07:14.443500   21063 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 17:07:14.443517   21063 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 17:07:14.458904   21063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 17:07:14.458923   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 17:07:14.461748   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 17:07:14.486682   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:07:14.492416   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:07:14.516362   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:07:14.537671   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:07:14.592669   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 17:07:14.592698   21063 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 17:07:14.611198   21063 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0815 17:07:14.611219   21063 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0815 17:07:14.652036   21063 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 17:07:14.652057   21063 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 17:07:14.671402   21063 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:07:14.671420   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 17:07:14.705166   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 17:07:14.705187   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 17:07:14.764005   21063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 17:07:14.764022   21063 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 17:07:14.764909   21063 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 17:07:14.764930   21063 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 17:07:14.794923   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:07:14.797882   21063 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 17:07:14.797909   21063 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 17:07:14.850865   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 17:07:14.850893   21063 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 17:07:14.864941   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 17:07:14.864966   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 17:07:14.950936   21063 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 17:07:14.950957   21063 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0815 17:07:15.071809   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 17:07:15.071833   21063 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 17:07:15.080435   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:07:15.081262   21063 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 17:07:15.081280   21063 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 17:07:15.090764   21063 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:07:15.090786   21063 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 17:07:15.121111   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 17:07:15.121139   21063 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 17:07:15.145060   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 17:07:15.145089   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 17:07:15.214561   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0815 17:07:15.255160   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:07:15.288108   21063 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 17:07:15.288132   21063 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 17:07:15.299760   21063 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:07:15.299779   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 17:07:15.305822   21063 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 17:07:15.305843   21063 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 17:07:15.407280   21063 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:07:15.407338   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 17:07:15.427052   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:07:15.508592   21063 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 17:07:15.508622   21063 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 17:07:15.567781   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 17:07:15.567801   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 17:07:15.602699   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:07:15.762719   21063 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 17:07:15.762750   21063 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 17:07:15.784906   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 17:07:15.784931   21063 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 17:07:15.968366   21063 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:07:15.968392   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 17:07:16.032931   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 17:07:16.032957   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 17:07:16.101899   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:07:16.169209   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 17:07:16.169239   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 17:07:16.382155   21063 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:07:16.382182   21063 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 17:07:16.589610   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:07:16.633092   21063 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.370682401s)
	I0815 17:07:16.633128   21063 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.37054985s)
	I0815 17:07:16.633151   21063 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
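	The bash pipeline that just completed splices a hosts{} block in front of the forward plugin in the coredns ConfigMap, so that host.minikube.internal resolves to the host-side IP (192.168.39.1 here). The sketch below performs an equivalent edit with client-go instead of the kubectl|sed pipeline; it is illustrative only, and the kubeconfig path is a placeholder.

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}

		hostsBlock := "        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }\n"
		corefile := cm.Data["Corefile"]
		if !strings.Contains(corefile, "host.minikube.internal") {
			// Insert the hosts block immediately before the forward plugin stanza.
			corefile = strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
			cm.Data["Corefile"] = corefile
			if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
	}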
	I0815 17:07:16.633999   21063 node_ready.go:35] waiting up to 6m0s for node "addons-973562" to be "Ready" ...
	I0815 17:07:16.641592   21063 node_ready.go:49] node "addons-973562" has status "Ready":"True"
	I0815 17:07:16.641614   21063 node_ready.go:38] duration metric: took 7.591501ms for node "addons-973562" to be "Ready" ...
	I0815 17:07:16.641624   21063 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:07:16.707336   21063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:17.181989   21063 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-973562" context rescaled to 1 replicas
	I0815 17:07:18.755881   21063 pod_ready.go:103] pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:20.968333   21063 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 17:07:20.968368   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:20.971622   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:20.972045   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:20.972072   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:20.972245   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:20.972451   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:20.972629   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:20.972761   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:21.298553   21063 pod_ready.go:103] pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:21.395461   21063 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 17:07:21.467782   21063 addons.go:234] Setting addon gcp-auth=true in "addons-973562"
	I0815 17:07:21.467835   21063 host.go:66] Checking if "addons-973562" exists ...
	I0815 17:07:21.468182   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:21.468209   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:21.483908   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0815 17:07:21.484353   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:21.484846   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:21.484871   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:21.485191   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:21.485656   21063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:07:21.485696   21063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:07:21.500871   21063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39727
	I0815 17:07:21.501265   21063 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:07:21.501706   21063 main.go:141] libmachine: Using API Version  1
	I0815 17:07:21.501723   21063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:07:21.502023   21063 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:07:21.502227   21063 main.go:141] libmachine: (addons-973562) Calling .GetState
	I0815 17:07:21.503726   21063 main.go:141] libmachine: (addons-973562) Calling .DriverName
	I0815 17:07:21.503965   21063 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 17:07:21.503992   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHHostname
	I0815 17:07:21.506618   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:21.506947   21063 main.go:141] libmachine: (addons-973562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:0b:0e", ip: ""} in network mk-addons-973562: {Iface:virbr1 ExpiryTime:2024-08-15 18:06:40 +0000 UTC Type:0 Mac:52:54:00:71:0b:0e Iaid: IPaddr:192.168.39.200 Prefix:24 Hostname:addons-973562 Clientid:01:52:54:00:71:0b:0e}
	I0815 17:07:21.506982   21063 main.go:141] libmachine: (addons-973562) DBG | domain addons-973562 has defined IP address 192.168.39.200 and MAC address 52:54:00:71:0b:0e in network mk-addons-973562
	I0815 17:07:21.507113   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHPort
	I0815 17:07:21.507273   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHKeyPath
	I0815 17:07:21.507434   21063 main.go:141] libmachine: (addons-973562) Calling .GetSSHUsername
	I0815 17:07:21.507680   21063 sshutil.go:53] new ssh client: &{IP:192.168.39.200 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/addons-973562/id_rsa Username:docker}
	I0815 17:07:22.485649   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.023875475s)
	I0815 17:07:22.485706   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.485717   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.485730   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.999018414s)
	I0815 17:07:22.485767   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.056982201s)
	I0815 17:07:22.485797   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.485808   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.485838   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.993396709s)
	I0815 17:07:22.485772   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.485860   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.485887   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.969504686s)
	I0815 17:07:22.485906   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.485915   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486004   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.94830673s)
	I0815 17:07:22.486065   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486073   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486076   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486082   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486154   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.691207161s)
	I0815 17:07:22.486188   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486199   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486276   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.405820618s)
	I0815 17:07:22.486290   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486298   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486334   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486357   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486368   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486375   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486422   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.231226215s)
	I0815 17:07:22.486433   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486438   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486442   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486447   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486451   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486458   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486506   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486514   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486522   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486529   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486548   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.486556   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486565   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486572   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486579   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486598   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486606   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486614   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486621   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486651   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.059572098s)
	I0815 17:07:22.486667   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	W0815 17:07:22.486679   21063 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 17:07:22.486689   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486700   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486710   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486709   21063 retry.go:31] will retry after 177.288932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
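	The failure being retried above is a CRD ordering race: the csi-hostpath-snapclass VolumeSnapshotClass is applied in the same batch as the volumesnapshotclasses CRD, and the API server has not yet registered the new kind, so the apply exits 1. One alternative way to handle this (a sketch only, not what minikube does) is to wait for the CRD's Established condition before applying objects of that kind; the kubeconfig path below is a placeholder.

	package main

	import (
		"context"
		"fmt"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForCRDEstablished polls the named CRD until the API server reports it
	// as Established, i.e. objects of its kind can safely be created.
	func waitForCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // not created yet; keep polling
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := apiextclient.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForCRDEstablished(context.Background(), client,
			"volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		fmt.Println("CRD established; safe to apply VolumeSnapshotClass objects")
	}

	In this run minikube simply re-runs the whole apply a moment later (the 17:07:22.664518 retry below uses --force), which succeeds once the CRDs are registered.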
	I0815 17:07:22.486716   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486358   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.271773766s)
	I0815 17:07:22.486734   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486743   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486755   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.486776   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486782   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.486789   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.486795   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.486851   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.486874   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.486881   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.487553   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.487580   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.487587   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.487595   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.487602   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.487651   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.487668   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.487677   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.487684   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.487690   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.487725   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.487741   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.487748   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.487756   21063 addons.go:475] Verifying addon registry=true in "addons-973562"
	I0815 17:07:22.487941   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.487964   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.487971   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.488126   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.488145   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.488165   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.488180   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.488248   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.488276   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.488285   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.488477   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.488519   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.488527   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490099   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.490124   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.490131   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490139   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.490145   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.490198   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.490215   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.490221   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490229   21063 addons.go:475] Verifying addon metrics-server=true in "addons-973562"
	I0815 17:07:22.490704   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.490732   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.490740   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490757   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.490768   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490776   21063 addons.go:475] Verifying addon ingress=true in "addons-973562"
	I0815 17:07:22.490827   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.888079663s)
	I0815 17:07:22.491068   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.491081   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.490870   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.490891   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.491137   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.490901   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.388968281s)
	I0815 17:07:22.491180   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.491188   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.491358   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.491368   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.491376   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.491383   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.491483   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.492668   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.492681   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.492880   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.492976   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.492992   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.493000   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.493182   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.493195   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:22.493323   21063 out.go:177] * Verifying ingress addon...
	I0815 17:07:22.493361   21063 out.go:177] * Verifying registry addon...
	I0815 17:07:22.494179   21063 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-973562 service yakd-dashboard -n yakd-dashboard
	
	I0815 17:07:22.495634   21063 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 17:07:22.495717   21063 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0815 17:07:22.506574   21063 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 17:07:22.506603   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:22.508811   21063 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 17:07:22.508828   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
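	The kapi lines above poll pods matching a label selector until they report Ready (here the registry pods in kube-system and the ingress-nginx controller pods in ingress-nginx). A minimal client-go sketch of that wait loop follows, assuming a placeholder kubeconfig path and the selector taken from the log; it is illustrative, not minikube's kapi implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	// waitForLabeledPods polls until at least one pod matches the selector and
	// every matching pod is Ready.
	func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // keep polling on transient errors or empty lists
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForLabeledPods(context.Background(), cs,
			"ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
			panic(err)
		}
		fmt.Println("all ingress-nginx pods are Ready")
	}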
	I0815 17:07:22.518529   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.518546   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.518768   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:22.518807   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.518824   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	W0815 17:07:22.518912   21063 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0815 17:07:22.523257   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:22.523271   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:22.523522   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:22.523542   21063 main.go:141] libmachine: Making call to close connection to plugin binary
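	The default-storageclass warning above is an ordinary optimistic-concurrency conflict: the local-path StorageClass was modified between the read and the update, so the stale write is rejected with "the object has been modified". The usual remedy is to re-read and reapply the change under retry.RetryOnConflict, sketched below with client-go; this is illustrative and not minikube's addon callback, and the kubeconfig path is a placeholder (the is-default-class annotation key is the standard one).

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation on a StorageClass,
	// re-reading and retrying whenever the update hits a Conflict.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict here triggers another Get+Update attempt
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := markNonDefault(context.Background(), cs, "local-path"); err != nil {
			panic(err)
		}
	}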
	I0815 17:07:22.664518   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:07:23.011574   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:23.011757   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:23.156665   21063 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.652676046s)
	I0815 17:07:23.156692   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.56698888s)
	I0815 17:07:23.156750   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:23.156767   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:23.157221   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:23.157236   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:23.157249   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:23.157266   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:23.157275   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:23.157583   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:23.157606   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:23.157622   21063 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-973562"
	I0815 17:07:23.158180   21063 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:07:23.159036   21063 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 17:07:23.160431   21063 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 17:07:23.161217   21063 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 17:07:23.161587   21063 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 17:07:23.161607   21063 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 17:07:23.199800   21063 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 17:07:23.199821   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:23.265781   21063 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 17:07:23.265802   21063 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 17:07:23.366073   21063 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:07:23.366092   21063 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 17:07:23.436364   21063 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:07:23.501663   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:23.502623   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:23.835088   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:23.850569   21063 pod_ready.go:103] pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:24.000587   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:24.001655   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:24.166701   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:24.500942   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:24.503795   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:24.670156   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:24.773014   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.108457579s)
	I0815 17:07:24.773056   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:24.773070   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:24.773324   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:24.773338   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:24.773380   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:24.773396   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:24.773407   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:24.773728   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:24.773751   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:24.773754   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:25.008814   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:25.009134   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:25.186592   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:25.306074   21063 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.869671841s)
	I0815 17:07:25.306122   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:25.306138   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:25.306396   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:25.306445   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:25.306454   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:25.306468   21063 main.go:141] libmachine: Making call to close driver server
	I0815 17:07:25.306476   21063 main.go:141] libmachine: (addons-973562) Calling .Close
	I0815 17:07:25.306713   21063 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:07:25.306727   21063 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:07:25.306810   21063 main.go:141] libmachine: (addons-973562) DBG | Closing plugin on server side
	I0815 17:07:25.308442   21063 addons.go:475] Verifying addon gcp-auth=true in "addons-973562"
	I0815 17:07:25.309945   21063 out.go:177] * Verifying gcp-auth addon...
	I0815 17:07:25.311924   21063 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 17:07:25.316520   21063 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 17:07:25.316537   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:25.530736   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:25.534704   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:25.665365   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:25.816849   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:26.000718   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:26.001062   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:26.166146   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:26.214138   21063 pod_ready.go:98] pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:25 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.200 HostIPs:[{IP:192.168.39.200}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-15 17:07:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 17:07:18 +0000 UTC,FinishedAt:2024-08-15 17:07:24 +0000 UTC,ContainerID:cri-o://5b02889d8198258a7ed67e8550b23ee65d86cd7d63e350ac76b79256c1b4d57a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://5b02889d8198258a7ed67e8550b23ee65d86cd7d63e350ac76b79256c1b4d57a Started:0xc001f4ca60 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c8dc40} {Name:kube-api-access-mb8pm MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c8dc50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 17:07:26.214173   21063 pod_ready.go:82] duration metric: took 9.506796195s for pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace to be "Ready" ...
	E0815 17:07:26.214186   21063 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-g8w79" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:25 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 17:07:14 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.200 HostIPs:[{IP:192.168.39.200}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-08-15 17:07:14 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 17:07:18 +0000 UTC,FinishedAt:2024-08-15 17:07:24 +0000 UTC,ContainerID:cri-o://5b02889d8198258a7ed67e8550b23ee65d86cd7d63e350ac76b79256c1b4d57a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://5b02889d8198258a7ed67e8550b23ee65d86cd7d63e350ac76b79256c1b4d57a Started:0xc001f4ca60 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001c8dc40} {Name:kube-api-access-mb8pm MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001c8dc50}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 17:07:26.214197   21063 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mpjgp" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.224574   21063 pod_ready.go:93] pod "coredns-6f6b679f8f-mpjgp" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.224592   21063 pod_ready.go:82] duration metric: took 10.386648ms for pod "coredns-6f6b679f8f-mpjgp" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.224600   21063 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.233555   21063 pod_ready.go:93] pod "etcd-addons-973562" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.233571   21063 pod_ready.go:82] duration metric: took 8.966544ms for pod "etcd-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.233581   21063 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.244554   21063 pod_ready.go:93] pod "kube-apiserver-addons-973562" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.244573   21063 pod_ready.go:82] duration metric: took 10.985949ms for pod "kube-apiserver-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.244581   21063 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.249467   21063 pod_ready.go:93] pod "kube-controller-manager-addons-973562" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.249503   21063 pod_ready.go:82] duration metric: took 4.91574ms for pod "kube-controller-manager-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.249510   21063 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9zjlq" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.315436   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:26.500411   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:26.501068   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:26.611330   21063 pod_ready.go:93] pod "kube-proxy-9zjlq" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:26.611358   21063 pod_ready.go:82] duration metric: took 361.840339ms for pod "kube-proxy-9zjlq" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.611372   21063 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:26.666206   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:26.815065   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:27.000977   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:27.002625   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:27.012031   21063 pod_ready.go:93] pod "kube-scheduler-addons-973562" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:27.012052   21063 pod_ready.go:82] duration metric: took 400.671098ms for pod "kube-scheduler-addons-973562" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:27.012065   21063 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:27.167202   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:27.316701   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:27.500196   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:27.500953   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:27.666463   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:27.814934   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:27.999909   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:28.000941   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:28.165798   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:28.314834   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:28.499555   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:28.499964   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:28.666757   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:28.816318   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:29.000307   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:29.000349   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:29.024541   21063 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:29.420067   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:29.422397   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:29.501199   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:29.501683   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:29.665506   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:29.815560   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:30.001003   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:30.001162   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:30.166171   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:30.315552   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:30.504040   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:30.504409   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:30.666672   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:30.815379   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:30.999702   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:31.000071   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:31.165439   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:31.315682   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:31.512681   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:31.513116   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:31.523120   21063 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:07:31.666324   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:31.815858   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:32.000054   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:32.000454   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:32.167309   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:32.314688   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:32.500418   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:32.502128   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:32.667134   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:32.815203   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:33.000371   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:33.001099   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:33.166087   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:33.315027   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:33.505916   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:33.506371   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:33.525350   21063 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace has status "Ready":"True"
	I0815 17:07:33.525374   21063 pod_ready.go:82] duration metric: took 6.513301495s for pod "nvidia-device-plugin-daemonset-9rkx2" in "kube-system" namespace to be "Ready" ...
	I0815 17:07:33.525384   21063 pod_ready.go:39] duration metric: took 16.883746774s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:07:33.525406   21063 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:07:33.525469   21063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:07:33.542585   21063 api_server.go:72] duration metric: took 19.754118352s to wait for apiserver process to appear ...
	I0815 17:07:33.542605   21063 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:07:33.542622   21063 api_server.go:253] Checking apiserver healthz at https://192.168.39.200:8443/healthz ...
	I0815 17:07:33.547505   21063 api_server.go:279] https://192.168.39.200:8443/healthz returned 200:
	ok
	I0815 17:07:33.548309   21063 api_server.go:141] control plane version: v1.31.0
	I0815 17:07:33.548328   21063 api_server.go:131] duration metric: took 5.716889ms to wait for apiserver health ...
	I0815 17:07:33.548336   21063 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:07:33.557378   21063 system_pods.go:59] 18 kube-system pods found
	I0815 17:07:33.557400   21063 system_pods.go:61] "coredns-6f6b679f8f-mpjgp" [a9818a08-6d11-41fe-81d9-afed636031df] Running
	I0815 17:07:33.557409   21063 system_pods.go:61] "csi-hostpath-attacher-0" [596b55e2-5cc7-4818-9e03-9e5bc52c081a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 17:07:33.557417   21063 system_pods.go:61] "csi-hostpath-resizer-0" [090d6c78-cb3b-44b5-b749-f53c4ec2fd5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 17:07:33.557425   21063 system_pods.go:61] "csi-hostpathplugin-csfg8" [0b7bd1d3-48f6-4f63-b5d1-bb152345a4f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 17:07:33.557429   21063 system_pods.go:61] "etcd-addons-973562" [27923b84-f63e-402c-b3b6-f21c39b7d672] Running
	I0815 17:07:33.557433   21063 system_pods.go:61] "kube-apiserver-addons-973562" [72f2bb55-2489-43d7-8831-425ddcab1c67] Running
	I0815 17:07:33.557440   21063 system_pods.go:61] "kube-controller-manager-addons-973562" [0f3e0bf9-94c4-4d47-8e4b-c3aacd43a567] Running
	I0815 17:07:33.557445   21063 system_pods.go:61] "kube-ingress-dns-minikube" [af9ffb5d-8172-478e-bf4f-ce5fafaba75b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0815 17:07:33.557452   21063 system_pods.go:61] "kube-proxy-9zjlq" [0ade0f95-ff6d-402e-8491-a63a6c75767c] Running
	I0815 17:07:33.557457   21063 system_pods.go:61] "kube-scheduler-addons-973562" [2aa94285-4622-46ad-a181-ed22ad8cbe17] Running
	I0815 17:07:33.557462   21063 system_pods.go:61] "metrics-server-8988944d9-2rpw7" [5ccb0984-23af-4380-b4e7-c266d3917b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 17:07:33.557468   21063 system_pods.go:61] "nvidia-device-plugin-daemonset-9rkx2" [4d297fcf-2d70-4adb-b547-f8b1dbe59d7b] Running
	I0815 17:07:33.557474   21063 system_pods.go:61] "registry-6fb4cdfc84-svjjj" [c96c1884-ddbb-4955-b9b8-6c11e6a0e893] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 17:07:33.557481   21063 system_pods.go:61] "registry-proxy-mjdz8" [e4645394-eb8e-49e3-bab8-fb41e2aaebdf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 17:07:33.557490   21063 system_pods.go:61] "snapshot-controller-56fcc65765-9nhk7" [99bc41a8-780f-4b5e-aaec-4b90a782e8e6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:07:33.557497   21063 system_pods.go:61] "snapshot-controller-56fcc65765-wcf7d" [7152eb2d-aaf6-41a7-af66-dc316576c773] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:07:33.557503   21063 system_pods.go:61] "storage-provisioner" [c3a49d08-7c2e-4333-bde2-165983d8812b] Running
	I0815 17:07:33.557509   21063 system_pods.go:61] "tiller-deploy-b48cc5f79-4z6lg" [e1606621-5c24-447f-bc36-4b807d48e67a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0815 17:07:33.557516   21063 system_pods.go:74] duration metric: took 9.175841ms to wait for pod list to return data ...
	I0815 17:07:33.557522   21063 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:07:33.559457   21063 default_sa.go:45] found service account: "default"
	I0815 17:07:33.559470   21063 default_sa.go:55] duration metric: took 1.940916ms for default service account to be created ...
	I0815 17:07:33.559476   21063 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:07:33.566682   21063 system_pods.go:86] 18 kube-system pods found
	I0815 17:07:33.566709   21063 system_pods.go:89] "coredns-6f6b679f8f-mpjgp" [a9818a08-6d11-41fe-81d9-afed636031df] Running
	I0815 17:07:33.566722   21063 system_pods.go:89] "csi-hostpath-attacher-0" [596b55e2-5cc7-4818-9e03-9e5bc52c081a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 17:07:33.566730   21063 system_pods.go:89] "csi-hostpath-resizer-0" [090d6c78-cb3b-44b5-b749-f53c4ec2fd5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 17:07:33.566742   21063 system_pods.go:89] "csi-hostpathplugin-csfg8" [0b7bd1d3-48f6-4f63-b5d1-bb152345a4f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 17:07:33.566752   21063 system_pods.go:89] "etcd-addons-973562" [27923b84-f63e-402c-b3b6-f21c39b7d672] Running
	I0815 17:07:33.566759   21063 system_pods.go:89] "kube-apiserver-addons-973562" [72f2bb55-2489-43d7-8831-425ddcab1c67] Running
	I0815 17:07:33.566766   21063 system_pods.go:89] "kube-controller-manager-addons-973562" [0f3e0bf9-94c4-4d47-8e4b-c3aacd43a567] Running
	I0815 17:07:33.566777   21063 system_pods.go:89] "kube-ingress-dns-minikube" [af9ffb5d-8172-478e-bf4f-ce5fafaba75b] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0815 17:07:33.566781   21063 system_pods.go:89] "kube-proxy-9zjlq" [0ade0f95-ff6d-402e-8491-a63a6c75767c] Running
	I0815 17:07:33.566785   21063 system_pods.go:89] "kube-scheduler-addons-973562" [2aa94285-4622-46ad-a181-ed22ad8cbe17] Running
	I0815 17:07:33.566792   21063 system_pods.go:89] "metrics-server-8988944d9-2rpw7" [5ccb0984-23af-4380-b4e7-c266d3917b45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 17:07:33.566801   21063 system_pods.go:89] "nvidia-device-plugin-daemonset-9rkx2" [4d297fcf-2d70-4adb-b547-f8b1dbe59d7b] Running
	I0815 17:07:33.566810   21063 system_pods.go:89] "registry-6fb4cdfc84-svjjj" [c96c1884-ddbb-4955-b9b8-6c11e6a0e893] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 17:07:33.566821   21063 system_pods.go:89] "registry-proxy-mjdz8" [e4645394-eb8e-49e3-bab8-fb41e2aaebdf] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 17:07:33.566832   21063 system_pods.go:89] "snapshot-controller-56fcc65765-9nhk7" [99bc41a8-780f-4b5e-aaec-4b90a782e8e6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:07:33.566845   21063 system_pods.go:89] "snapshot-controller-56fcc65765-wcf7d" [7152eb2d-aaf6-41a7-af66-dc316576c773] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:07:33.566851   21063 system_pods.go:89] "storage-provisioner" [c3a49d08-7c2e-4333-bde2-165983d8812b] Running
	I0815 17:07:33.566862   21063 system_pods.go:89] "tiller-deploy-b48cc5f79-4z6lg" [e1606621-5c24-447f-bc36-4b807d48e67a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0815 17:07:33.566870   21063 system_pods.go:126] duration metric: took 7.387465ms to wait for k8s-apps to be running ...
	I0815 17:07:33.566883   21063 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:07:33.566932   21063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:07:33.581827   21063 system_svc.go:56] duration metric: took 14.935668ms WaitForService to wait for kubelet
	I0815 17:07:33.581856   21063 kubeadm.go:582] duration metric: took 19.793392359s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:07:33.581874   21063 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:07:33.584624   21063 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:07:33.584654   21063 node_conditions.go:123] node cpu capacity is 2
	I0815 17:07:33.584665   21063 node_conditions.go:105] duration metric: took 2.787137ms to run NodePressure ...
	I0815 17:07:33.584675   21063 start.go:241] waiting for startup goroutines ...
	I0815 17:07:33.665452   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:33.815843   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:33.999380   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:33.999608   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:34.165693   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:34.315006   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:34.499373   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:34.499618   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:34.667363   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:34.815957   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:34.999767   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:35.002138   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:35.166370   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:35.315048   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:35.500576   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:35.500720   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:35.666133   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:35.815309   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:36.000617   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:36.001360   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:36.165638   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:36.314651   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:36.500717   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:36.500831   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:36.665099   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:36.815438   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:37.002830   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:37.003134   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:37.166677   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:37.315904   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:37.499865   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:37.500173   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:37.665663   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:37.816227   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:38.000273   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:38.000974   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:38.165676   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:38.631984   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:38.632195   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:38.632926   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:38.665863   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:38.815651   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:39.000312   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:39.000704   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:39.167468   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:39.315762   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:39.501563   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:39.501698   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:39.666238   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:39.815417   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:40.000155   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:40.000774   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:40.165380   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:40.318534   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:40.501153   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:40.501494   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:40.665944   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:40.815252   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:41.000122   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:41.000344   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:41.169716   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:41.315261   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:41.500813   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:41.500890   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:41.665039   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:41.815397   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:42.000099   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:42.000227   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:42.166919   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:42.315320   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:42.500608   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:42.500874   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:42.667082   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:42.815675   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:43.001636   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:43.003021   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:43.170559   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:43.315865   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:43.501748   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:43.502177   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:43.666082   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:43.815557   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:44.000808   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:44.001071   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:44.166281   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:44.315687   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:44.499900   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:44.500574   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:44.666048   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:44.815108   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:45.001212   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:45.001474   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:45.167095   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:45.315713   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:45.500284   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:45.500823   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:45.666449   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:45.815773   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:45.999951   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:45.999962   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:46.165766   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:46.314776   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:46.499526   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:46.499710   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:46.666316   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:46.815689   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:47.001229   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:47.001904   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:47.166692   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:47.315772   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:47.500704   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:47.501875   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:47.666002   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:47.825700   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:48.002017   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:48.003103   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:48.165376   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:48.315693   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:48.502452   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:48.502895   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:48.665552   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:48.815422   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:49.000249   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:49.000972   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:49.166350   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:49.315482   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:49.500458   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:49.503695   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:49.666748   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:49.815489   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:49.999475   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:50.001081   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:50.165633   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:50.316181   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:50.500219   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:50.501475   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:50.666210   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:50.815638   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:51.000816   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:51.000884   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:51.165283   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:51.315660   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:51.500694   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:51.500992   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:51.665321   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:51.815655   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:52.000587   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:52.000679   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:52.166333   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:52.317715   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:52.500465   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:52.500923   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:52.666476   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:52.815822   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:53.001717   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:53.001961   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:53.305973   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:53.323308   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:53.499786   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:53.500331   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:53.666240   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:53.815731   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:54.000305   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:54.000593   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:54.166084   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:54.460270   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:54.500474   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:54.502046   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:54.666222   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:54.815203   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:54.999990   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:55.000714   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:55.168316   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:55.316126   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:55.502225   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:55.502778   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:55.668669   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:55.816703   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:56.000303   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:56.000640   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:56.166211   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:56.315490   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:56.500085   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:56.500622   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:56.666165   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:56.814860   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:57.000375   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:57.000387   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:57.166336   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:57.317231   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:57.500839   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:57.501286   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:57.665782   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:57.815148   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:57.999632   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:58.000214   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:58.166866   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:58.315876   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:58.500311   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:58.500476   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:58.665884   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:58.837603   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:59.001500   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:59.002320   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:59.166189   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:59.315374   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:59.500624   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:07:59.501502   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:59.666136   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:59.815813   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:00.002221   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:00.002372   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:00.165465   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:00.316161   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:00.499981   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:00.501004   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:00.667219   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:00.821503   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:01.000978   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:01.002983   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:01.166489   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:01.315348   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:01.505509   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:01.505845   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:01.665647   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:01.815695   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:02.001279   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:02.001687   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:02.165956   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:02.315213   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:02.500578   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:02.501438   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:02.666335   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:02.815461   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:03.001190   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:03.001530   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:03.166453   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:03.315612   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:03.500520   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:03.501307   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:03.665449   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:03.816018   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:04.000637   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:04.001236   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:04.165049   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:04.315527   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:04.501100   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:04.501280   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:04.666547   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:04.816463   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:05.000801   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:05.001745   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:05.166036   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:05.315018   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:05.500521   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:05.500638   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:05.664979   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:05.815054   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:06.000008   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:06.000294   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:06.167075   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:06.315587   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:06.500968   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:06.501083   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:06.665821   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:06.815159   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:07.000263   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:07.001020   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:07.166623   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:07.316128   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:07.501207   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:07.501282   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:07.666602   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:07.815974   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:08.000731   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:08.000959   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:08.165763   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:08.315379   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:08.501434   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:08.501914   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:08.666060   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:08.815594   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:09.000679   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:09.001702   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:09.165737   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:09.315513   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:09.500651   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:09.501377   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:09.666296   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:09.815411   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:10.138171   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:10.143269   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:10.241957   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:10.316285   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:10.500877   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:10.501180   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:10.665944   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:10.815569   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:11.000283   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:11.001524   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:11.167027   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:11.315495   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:11.500590   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:11.501360   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:11.786829   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:11.815069   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:12.000899   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:12.001057   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:12.165481   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:12.315917   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:12.500447   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:12.501019   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:12.666277   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:12.815745   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:12.999950   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:13.000790   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:13.166332   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:13.315391   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:13.500652   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:08:13.501245   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:13.665079   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:13.815516   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:14.000917   21063 kapi.go:107] duration metric: took 51.505280014s to wait for kubernetes.io/minikube-addons=registry ...
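	(Editor's note: the repeated "waiting for pod ... current state: Pending" lines above come from minikube's label-selector polling loop; the duration line marks the point where the registry selector finally reported Ready after ~51.5s. The sketch below is not minikube's kapi.go code, only a minimal illustration of that polling pattern using client-go; the namespace "kube-system", the 500ms interval, and the 10-minute timeout are assumptions chosen for the example.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodsRunning lists pods matching selector in ns on a fixed interval and
	// returns once every matching pod reports phase Running, logging a "waiting"
	// line each round, similar in spirit to the kapi.go:96 messages above.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for pods matching %q", selector)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		// Build a client from the default kubeconfig (assumption: run from a machine
		// with access to the cluster under test).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll the same selector the log above was waiting on.
		if err := waitForPodsRunning(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 500*time.Millisecond, 10*time.Minute); err != nil {
			panic(err)
		}
	}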
	I0815 17:08:14.000992   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:14.166001   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:14.315031   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:14.499755   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:14.666132   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:14.816203   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:15.000860   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:15.165300   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:15.315326   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:15.500227   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:15.665556   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:15.816643   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:16.001281   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:16.165884   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:16.315535   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:16.500169   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:16.665610   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:16.815005   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:16.999450   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:17.166581   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:17.315475   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:17.500439   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:17.666976   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:18.033852   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:18.034249   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:18.166670   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:18.315487   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:18.500375   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:18.666536   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:18.815489   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:19.000261   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:19.165679   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:19.315021   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:19.500842   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:19.666657   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:19.815392   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:20.004464   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:20.166360   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:20.315952   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:20.501853   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:20.665533   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:20.815930   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:20.999713   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:21.166601   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:21.316422   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:21.500246   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:21.665856   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:21.815461   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:21.999814   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:22.166138   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:22.315563   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:22.500951   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:22.665173   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:22.815823   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:23.000474   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:23.166340   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:23.315859   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:23.499441   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:23.666692   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:23.815305   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:24.000647   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:24.167984   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:24.315134   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:24.499663   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:24.666315   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:24.815829   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:24.999818   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:25.165800   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:25.315354   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:25.499478   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:25.666064   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:25.815594   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:26.000725   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:26.167004   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:26.314993   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:26.500581   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:26.666415   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:26.815280   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:27.000225   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:27.165278   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:27.315578   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:27.499950   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:27.665617   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:27.816130   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:28.000706   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:28.166783   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:28.315625   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:28.500783   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:28.665525   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:28.815854   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:28.999512   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:29.166546   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:29.316139   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:29.500409   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:29.665745   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:29.814995   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:29.999973   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:30.165953   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:30.316291   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:30.500657   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:30.667129   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:30.816183   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:31.000064   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:31.165859   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:31.315313   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:31.500025   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:31.665727   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:31.814901   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:31.999550   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:32.166340   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:32.315710   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:32.500597   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:32.666238   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:32.815468   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:33.000186   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:33.165657   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:33.316673   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:33.500095   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:33.665893   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:33.815354   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:34.000146   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:34.166522   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:34.315930   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:34.499515   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:34.666191   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:34.816377   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:35.000133   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:35.165869   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:35.315606   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:35.500064   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:35.665751   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:35.815636   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:36.000525   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:36.166242   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:36.315580   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:36.500108   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:36.665258   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:36.815864   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:37.000607   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:37.165617   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:37.316410   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:37.500436   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:37.665934   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:37.815368   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:38.000035   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:38.165582   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:38.316069   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:38.499898   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:38.665337   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:38.815738   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:39.002024   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:39.165962   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:39.315585   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:39.500207   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:39.666242   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:39.815564   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:40.001286   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:40.166190   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:40.316531   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:40.500640   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:40.666177   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:40.815491   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:41.000469   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:41.166537   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:41.316935   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:41.500041   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:41.665428   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:41.815574   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:42.000956   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:42.165430   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:42.316198   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:42.500051   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:42.665713   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:42.815847   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:43.001030   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:43.181275   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:43.317125   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:43.500025   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:43.665709   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:43.815681   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:44.000352   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:44.166037   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:44.315511   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:44.500701   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:44.666441   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:44.816872   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:44.999732   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:45.169415   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:45.315673   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:45.500156   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:45.665860   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:45.816193   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:46.000102   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:46.165764   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:46.315166   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:46.499700   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:46.665831   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:46.815180   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:46.999680   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:47.166718   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:47.316279   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:47.500191   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:47.665706   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:47.816373   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:48.000331   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:48.167980   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:48.319012   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:48.507455   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:48.665819   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:48.815240   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:49.000063   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:49.165313   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:49.315308   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:49.500125   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:49.666668   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:49.815553   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:50.002363   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:50.165850   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:50.324008   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:50.500860   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:50.666616   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:50.815310   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:51.000307   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:51.165977   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:51.315507   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:51.500237   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:51.665790   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:51.815352   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:52.000457   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:52.165966   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:52.324200   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:52.500921   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:52.664954   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:52.815016   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:52.999756   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:53.169189   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:53.316071   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:53.499957   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:53.665930   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:53.815497   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:54.000162   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:54.165222   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:54.316091   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:54.500361   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:54.667071   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:54.816323   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:55.001105   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:55.165799   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:55.315749   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:55.500141   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:55.665748   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:55.816745   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:56.001291   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:56.167152   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:56.315865   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:56.510826   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:56.669783   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:56.815718   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:57.003786   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:57.164989   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:57.317763   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:57.500265   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:57.670388   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:57.816335   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:58.001772   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:58.167806   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:58.314966   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:58.501399   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:58.666094   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:58.815874   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:59.000217   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:59.166020   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:59.315832   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:59.500468   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:08:59.669975   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:08:59.816943   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:00.000388   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:00.168330   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:00.315258   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:00.501331   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:00.666396   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:00.817461   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:01.000806   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:01.166827   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:01.315501   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:01.500055   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:01.665833   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:01.814977   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:01.999873   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:02.166166   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:02.315738   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:02.653586   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:02.758409   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:02.855929   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:02.999980   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:03.165308   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:03.323118   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:03.500369   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:03.665756   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:03.816027   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:04.000155   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:04.166398   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:09:04.316909   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:04.500578   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:04.665834   21063 kapi.go:107] duration metric: took 1m41.504612749s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 17:09:04.815886   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:05.001503   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:05.316089   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:05.499996   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:05.815639   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:06.001044   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:06.315985   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:06.499733   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:06.816066   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:06.999525   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:07.315317   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:07.500033   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:07.817140   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:07.999771   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:08.315759   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:08.500796   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:08.816126   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:08.999612   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:09.315089   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:09.499981   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:09.816704   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:10.002326   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:10.315645   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:10.500022   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:10.815998   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:10.999510   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:11.315324   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:11.500208   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:11.815991   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:11.999602   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:12.315145   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:12.500678   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:12.815980   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:13.000165   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:13.315981   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:13.499921   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:13.815291   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:14.000117   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:14.315799   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:14.500621   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:14.815693   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:15.000636   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:15.315950   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:15.500283   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:15.816394   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:15.999955   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:16.320503   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:16.500803   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:16.816641   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:16.999918   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:17.316138   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:17.499808   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:17.816168   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:17.999868   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:18.315709   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:18.500113   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:18.816285   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:18.999986   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:19.316023   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:19.499683   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:19.815607   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:20.000451   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:20.315392   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:20.500441   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:20.815824   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:21.000381   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:21.314957   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:21.500054   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:21.816032   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:22.000437   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:22.316728   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:22.500595   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:22.815501   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:23.000767   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:23.315701   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:23.500217   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:23.815770   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:23.999781   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:24.315597   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:24.500880   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:24.815535   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:25.000548   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:25.315512   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:25.502897   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:25.815971   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:25.999661   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:26.315185   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:26.500202   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:26.816624   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:27.000611   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:27.315007   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:27.499837   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:27.815469   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:28.000365   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:28.315750   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:28.500242   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:28.815826   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:29.000740   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:29.315880   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:29.499891   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:29.815502   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:30.001418   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:30.315775   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:30.500292   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:30.816980   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:30.999970   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:31.315877   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:31.499779   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:31.815788   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:32.001026   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:32.315568   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:32.501217   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:32.816273   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:33.001049   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:33.315587   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:33.500811   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:33.815559   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:34.000015   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:34.316065   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:34.501283   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:34.815005   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:35.000627   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:35.316133   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:35.500221   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:35.816047   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:36.000426   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:36.315106   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:36.499901   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:36.815875   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:37.001260   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:37.316408   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:37.499986   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:37.816671   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:38.000695   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:38.315790   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:38.500541   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:38.815007   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:38.999561   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:39.314809   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:39.500619   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:39.815873   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:40.000471   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:40.321120   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:40.500426   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:40.814985   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:41.000153   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:41.315813   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:41.501656   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:41.816363   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:42.001385   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:42.315655   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:42.501191   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:42.815887   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:43.001619   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:43.316367   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:43.500451   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:43.816230   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:44.000429   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:44.315825   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:44.500462   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:44.815935   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:45.001582   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:45.316166   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:45.501057   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:45.815264   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:46.005795   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:46.315080   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:46.499931   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:46.816746   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:47.002923   21063 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:09:47.316447   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:47.500823   21063 kapi.go:107] duration metric: took 2m25.005103671s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 17:09:47.815289   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:48.318066   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:48.815315   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:49.317556   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:49.815707   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:50.316583   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:50.816623   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:51.448063   21063 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:51.815826   21063 kapi.go:107] duration metric: took 2m26.503899498s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 17:09:51.817643   21063 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-973562 cluster.
	I0815 17:09:51.819034   21063 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 17:09:51.820498   21063 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 17:09:51.821964   21063 out.go:177] * Enabled addons: storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, helm-tiller, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0815 17:09:51.823156   21063 addons.go:510] duration metric: took 2m38.03464767s for enable addons: enabled=[storage-provisioner nvidia-device-plugin ingress-dns metrics-server helm-tiller cloud-spanner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0815 17:09:51.823188   21063 start.go:246] waiting for cluster config update ...
	I0815 17:09:51.823204   21063 start.go:255] writing updated cluster config ...
	I0815 17:09:51.823493   21063 ssh_runner.go:195] Run: rm -f paused
	I0815 17:09:51.877018   21063 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 17:09:51.878700   21063 out.go:177] * Done! kubectl is now configured to use "addons-973562" cluster and "default" namespace by default
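	(Editor's note: the kapi.go:96 / kapi.go:107 lines above are minikube's addon wait loop, which polls pods by label selector until they report Running and then logs the elapsed duration. The following is a minimal, illustrative client-go sketch of that kind of loop, not minikube's actual implementation; the kubeconfig path, namespace, selector, poll interval, and timeout are assumptions chosen for the example.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until all are Running
	// or timeout expires, printing a "waiting for pod" line on each pass,
	// similar in spirit to the log output above.
	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						ready = false
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					}
				}
				if ready {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval for the sketch
		}
		return fmt.Errorf("timed out waiting for pods matching %s", selector)
	}

	func main() {
		// Load the default kubeconfig (~/.kube/config); in a test harness this
		// would instead come from the cluster context under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Namespace and selector here mirror the gcp-auth wait seen above and are assumptions.
		if err := waitForPods(context.Background(), cs, "gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 5*time.Minute); err != nil {
			panic(err)
		}
	}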
	
	
	==> CRI-O <==
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.780168457Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-wzp2w,Uid:2579b064-be76-41aa-8fd9-ea64aefd8eed,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742066353864446,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:14:26.042833185Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1723741862536273508,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:11:02.220282497Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&PodSandboxMetadata{Name:headlamp-57fb76fcdb-lt6rm,Uid:7838ea9e-895e-43bc-8be4-9f0d98616812,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741862301099816,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 7838ea9e-895e-43bc-8be4-9f0d98616812,pod-template-hash: 57fb76fcdb,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
08-15T17:11:01.993683521Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&PodSandboxMetadata{Name:busybox,Uid:d1f14268-bbdd-4b42-8d28-16db4e873bcd,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741792461162327,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:09:52.145933393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&PodSandboxMetadata{Name:metrics-server-8988944d9-2rpw7,Uid:5ccb0984-23af-4380-b4e7-c266d3917b45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741639924517831,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.po
d.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,k8s-app: metrics-server,pod-template-hash: 8988944d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:07:19.614506130Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c3a49d08-7c2e-4333-bde2-165983d8812b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741639195440011,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annota
tions\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T17:07:18.572852784Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-mpjgp,Uid:a9818a08-6d11-41fe-81d9-afed636031df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741634433021944,Labels:map[string]string{io.kubernetes.container.
name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:07:14.120848742Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&PodSandboxMetadata{Name:kube-proxy-9zjlq,Uid:0ade0f95-ff6d-402e-8491-a63a6c75767c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741633756021250,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:07:13.445462397Z,kubernetes.io/config.source: api,},Runtime
Handler:,},&PodSandbox{Id:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-973562,Uid:6d7467ad3bb5426d6fd74483911510fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741623320829021,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.200:8443,kubernetes.io/config.hash: 6d7467ad3bb5426d6fd74483911510fb,kubernetes.io/config.seen: 2024-08-15T17:07:02.840579133Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-973562,Uid:981e4757113ba2796f2c06755ba75895,Namespace:kube-s
ystem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741623318898534,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 981e4757113ba2796f2c06755ba75895,kubernetes.io/config.seen: 2024-08-15T17:07:02.840581353Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-973562,Uid:800eb95466b0525df544e39951ff83ce,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741623303286388,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b05
25df544e39951ff83ce,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 800eb95466b0525df544e39951ff83ce,kubernetes.io/config.seen: 2024-08-15T17:07:02.840555734Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&PodSandboxMetadata{Name:etcd-addons-973562,Uid:10e1d7dada6ca365bad346bd612c6c16,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723741623294764892,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.200:2379,kubernetes.io/config.hash: 10e1d7dada6ca365bad346bd612c6c16,kubernetes.io/config.seen: 2024-08-15T17:07:02.840558784Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file
="otel-collector/interceptors.go:74" id=60dc7186-5fb7-4460-bf23-5c3c4a377d0c name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.780930436Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=982f3e72-dec1-4cd0-92de-811a4b45e264 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.780983615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=982f3e72-dec1-4cd0-92de-811a4b45e264 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.781224026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1787e89abb0afeee25c502bc3195e1f7f75942feeaa3b35ff1b3d7f52491058,PodSandboxId:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723742069249252355,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ca963650cd41c4580baf1d6e5d117b855a65c62a34882224efc66db4d9bca0,PodSandboxId:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723741927898142792,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f3d62a377a2f503bec0f3018f57156716152b2b29b1ea99afcb3e6749e528,PodSandboxId:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723741924365921852,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7838ea9e-895e-43bc-8be4-9f0d98616812,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac471d524493b542fb2b5a7f3d5d454c624dcc13df7c946cac15801b10cce2b0,PodSandboxId:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723741795552712802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c36779dc514bd45e38edcb986fa83a4e6587d65d56edbd718ba93bf975c332,PodSandboxId:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723741680671397099,Labels:map[string]string{io.kubernetes.container.name: metrics-s
erver,io.kubernetes.pod.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e,PodSandboxId:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cr
eatedAt:1723741640034562028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171,PodSandboxId:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723741637306610
897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe,PodSandboxId:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6
494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723741634311918570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9,PodSandboxId:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723741623573557719,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2,PodSandboxId:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723741623565674642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97,PodSandboxId:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897
f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723741623531134743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b0525df544e39951ff83ce,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f,PodSandboxId:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b158
06c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723741623473873857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=982f3e72-dec1-4cd0-92de-811a4b45e264 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.813519678Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d233f80e-9dbf-46ba-801c-e3d7582bcfde name=/runtime.v1.RuntimeService/Version
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.813596220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d233f80e-9dbf-46ba-801c-e3d7582bcfde name=/runtime.v1.RuntimeService/Version
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.814848819Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=976319ac-74da-48f7-9f5b-387228375136 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.816019940Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742176815994039,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=976319ac-74da-48f7-9f5b-387228375136 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.816573743Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=90db4d9b-a86d-4a3e-8054-87814af2e799 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.816631566Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=90db4d9b-a86d-4a3e-8054-87814af2e799 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.816936198Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1787e89abb0afeee25c502bc3195e1f7f75942feeaa3b35ff1b3d7f52491058,PodSandboxId:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723742069249252355,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ca963650cd41c4580baf1d6e5d117b855a65c62a34882224efc66db4d9bca0,PodSandboxId:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723741927898142792,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f3d62a377a2f503bec0f3018f57156716152b2b29b1ea99afcb3e6749e528,PodSandboxId:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723741924365921852,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7838ea9e-895e-43bc-8be4-9f0d98616812,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac471d524493b542fb2b5a7f3d5d454c624dcc13df7c946cac15801b10cce2b0,PodSandboxId:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723741795552712802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c36779dc514bd45e38edcb986fa83a4e6587d65d56edbd718ba93bf975c332,PodSandboxId:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723741680671397099,Labels:map[string]string{io.kubernetes.container.name: metrics-s
erver,io.kubernetes.pod.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e,PodSandboxId:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cr
eatedAt:1723741640034562028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171,PodSandboxId:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723741637306610
897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe,PodSandboxId:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6
494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723741634311918570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9,PodSandboxId:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723741623573557719,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2,PodSandboxId:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723741623565674642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97,PodSandboxId:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897
f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723741623531134743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b0525df544e39951ff83ce,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f,PodSandboxId:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b158
06c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723741623473873857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=90db4d9b-a86d-4a3e-8054-87814af2e799 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.853160072Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=384ef8a5-af09-42f2-94f3-38d06287e841 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.853231695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=384ef8a5-af09-42f2-94f3-38d06287e841 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.854361136Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2ff277f-97b7-46f4-8451-193d1d803539 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.855619035Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742176855489882,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2ff277f-97b7-46f4-8451-193d1d803539 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.856225099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ccaa93fe-bb81-4ea5-bea5-d23f2c601eab name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.856390365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ccaa93fe-bb81-4ea5-bea5-d23f2c601eab name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.856684908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1787e89abb0afeee25c502bc3195e1f7f75942feeaa3b35ff1b3d7f52491058,PodSandboxId:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723742069249252355,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ca963650cd41c4580baf1d6e5d117b855a65c62a34882224efc66db4d9bca0,PodSandboxId:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723741927898142792,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f3d62a377a2f503bec0f3018f57156716152b2b29b1ea99afcb3e6749e528,PodSandboxId:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723741924365921852,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7838ea9e-895e-43bc-8be4-9f0d98616812,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac471d524493b542fb2b5a7f3d5d454c624dcc13df7c946cac15801b10cce2b0,PodSandboxId:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723741795552712802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c36779dc514bd45e38edcb986fa83a4e6587d65d56edbd718ba93bf975c332,PodSandboxId:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723741680671397099,Labels:map[string]string{io.kubernetes.container.name: metrics-s
erver,io.kubernetes.pod.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e,PodSandboxId:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cr
eatedAt:1723741640034562028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171,PodSandboxId:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723741637306610
897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe,PodSandboxId:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6
494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723741634311918570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9,PodSandboxId:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723741623573557719,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2,PodSandboxId:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723741623565674642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97,PodSandboxId:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897
f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723741623531134743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b0525df544e39951ff83ce,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f,PodSandboxId:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b158
06c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723741623473873857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ccaa93fe-bb81-4ea5-bea5-d23f2c601eab name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.895248389Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bfaf0eb1-5d8f-4ec7-9c43-caf66a17d583 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.895481145Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bfaf0eb1-5d8f-4ec7-9c43-caf66a17d583 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.896898344Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca5f8513-f60d-4f12-88b0-96d4fbaace23 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.898490316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742176898464089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca5f8513-f60d-4f12-88b0-96d4fbaace23 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.898992156Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=497e2268-d276-4bd7-8e39-1141b5d9bd05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.899060807Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=497e2268-d276-4bd7-8e39-1141b5d9bd05 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:16:16 addons-973562 crio[685]: time="2024-08-15 17:16:16.899392162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1787e89abb0afeee25c502bc3195e1f7f75942feeaa3b35ff1b3d7f52491058,PodSandboxId:7a1b069065ecad7a63f983efc81506ee2ed8fee5b8af6f86592048ffb906c92c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1723742069249252355,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-wzp2w,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2579b064-be76-41aa-8fd9-ea64aefd8eed,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2ca963650cd41c4580baf1d6e5d117b855a65c62a34882224efc66db4d9bca0,PodSandboxId:96582e0891be7fe9f967fb5f630132ed5c3ffc44a13d842c5c1ec1631c1e574d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95,State:CONTAINER_RUNNING,CreatedAt:1723741927898142792,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f83b1404-c3f9-436f-a4fa-c82dd8ac7b90,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a5f3d62a377a2f503bec0f3018f57156716152b2b29b1ea99afcb3e6749e528,PodSandboxId:f9f8a5263d0c516e86b6b84024918cb08097232e67a476f79e9dbea80c14ae57,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9c6bbb2a8c1703a86a390eb9553721fcbf12a31c3d1be73d46f83fdeb72d21b1,State:CONTAINER_RUNNING,CreatedAt:1723741924365921852,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-57fb76fcdb-lt6rm,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.ui
d: 7838ea9e-895e-43bc-8be4-9f0d98616812,},Annotations:map[string]string{io.kubernetes.container.hash: 759a8b54,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac471d524493b542fb2b5a7f3d5d454c624dcc13df7c946cac15801b10cce2b0,PodSandboxId:f0de5f43bd64b4cc01a29d401fb8229c95375fb560e5a42fab13352ee972982e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723741795552712802,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubern
etes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d1f14268-bbdd-4b42-8d28-16db4e873bcd,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:26c36779dc514bd45e38edcb986fa83a4e6587d65d56edbd718ba93bf975c332,PodSandboxId:336e0b99d7ea11132ef6fbfbc9f451460ecc49a82dac7d859bb7342a38ff8e6e,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a24c7c057ec8730aaa152f77366454835a46dc699fcf243698a622788fd48d62,State:CONTAINER_RUNNING,CreatedAt:1723741680671397099,Labels:map[string]string{io.kubernetes.container.name: metrics-s
erver,io.kubernetes.pod.name: metrics-server-8988944d9-2rpw7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ccb0984-23af-4380-b4e7-c266d3917b45,},Annotations:map[string]string{io.kubernetes.container.hash: d1eaa25b,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e,PodSandboxId:a2282c836ecde27237e2d5e8607ba405392b28ee4bdb446ea8b9c4bfaca33b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,Cr
eatedAt:1723741640034562028,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3a49d08-7c2e-4333-bde2-165983d8812b,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171,PodSandboxId:803fe16e005170063be38728d5ba3d1bc4abbc2ec159fdf8ad2f85626313f447,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723741637306610
897,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mpjgp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a9818a08-6d11-41fe-81d9-afed636031df,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe,PodSandboxId:ae8398b86515edfade40518c0c7bdba9416748d390628c63f6ce07f7a1f6ef2a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6
494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723741634311918570,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9zjlq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ade0f95-ff6d-402e-8491-a63a6c75767c,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9,PodSandboxId:23ad700d62d0a35d36e49350914eaf004bde64691830dc072e8fc764b628cc5c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723741623573557719,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 10e1d7dada6ca365bad346bd612c6c16,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2,PodSandboxId:4a169f6964f2c8a53a64ce41d999fb5f9aa724f42778013e17331a240c0960d9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723741623565674642,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d7467ad3bb5426d6fd74483911510fb,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97,PodSandboxId:0a7f6e41dc2b21c3b96753c55c9fdc5182e28f54248a748f2dd0662172f33c00,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897
f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723741623531134743,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 800eb95466b0525df544e39951ff83ce,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f,PodSandboxId:ebe45e82f021e2ccdbeff23200e9beb031aa6b5eb7d66a86a7503f778a75e650,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b158
06c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723741623473873857,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-973562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 981e4757113ba2796f2c06755ba75895,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=497e2268-d276-4bd7-8e39-1141b5d9bd05 name=/runtime.v1.RuntimeService/ListContainers
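The debug entries above are CRI API round-trips (Version, ImageFsInfo, ListContainers) recorded by CRI-O while the logs were collected. As a rough cross-check, the same endpoints can be queried by hand with crictl; the socket path matches the cri-socket annotation reported further down in this log, but the exact invocation is illustrative rather than taken from the test run:

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo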
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b1787e89abb0a       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   7a1b069065eca       hello-world-app-55bf9c44b4-wzp2w
	f2ca963650cd4       docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9                         4 minutes ago        Running             nginx                     0                   96582e0891be7       nginx
	8a5f3d62a377a       ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971                   4 minutes ago        Running             headlamp                  0                   f9f8a5263d0c5       headlamp-57fb76fcdb-lt6rm
	ac471d524493b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago        Running             busybox                   0                   f0de5f43bd64b       busybox
	26c36779dc514       registry.k8s.io/metrics-server/metrics-server@sha256:31f034feb3f16062e93be7c40efc596553c89de172e2e412e588f02382388872   8 minutes ago        Running             metrics-server            0                   336e0b99d7ea1       metrics-server-8988944d9-2rpw7
	7a08fe240691c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        8 minutes ago        Running             storage-provisioner       0                   a2282c836ecde       storage-provisioner
	8c7dabbdd78f5       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                                        8 minutes ago        Running             coredns                   0                   803fe16e00517       coredns-6f6b679f8f-mpjgp
	b76f57abbb47d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                                        9 minutes ago        Running             kube-proxy                0                   ae8398b86515e       kube-proxy-9zjlq
	80fea5c45564c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        9 minutes ago        Running             etcd                      0                   23ad700d62d0a       etcd-addons-973562
	3534ecea3b438       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                                        9 minutes ago        Running             kube-apiserver            0                   4a169f6964f2c       kube-apiserver-addons-973562
	3128463831d39       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                                        9 minutes ago        Running             kube-scheduler            0                   0a7f6e41dc2b2       kube-scheduler-addons-973562
	3a83258a356d6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                                        9 minutes ago        Running             kube-controller-manager   0                   ebe45e82f021e       kube-controller-manager-addons-973562
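The listing above is the container-status snapshot gathered for this report. A comparable listing can typically be reproduced against the same node over minikube ssh (illustrative invocation; assumes crictl is available on the node image):

  minikube -p addons-973562 ssh -- sudo crictl ps -a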
	
	
	==> coredns [8c7dabbdd78f5269bc7a8bf4704238d900673e48be099540c6f354d1996d7171] <==
	[INFO] 10.244.0.7:53290 - 8087 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000302511s
	[INFO] 10.244.0.7:34213 - 6733 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000114509s
	[INFO] 10.244.0.7:34213 - 27465 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000136584s
	[INFO] 10.244.0.7:44234 - 59051 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000145018s
	[INFO] 10.244.0.7:44234 - 6052 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105852s
	[INFO] 10.244.0.7:37848 - 21755 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000128264s
	[INFO] 10.244.0.7:37848 - 62713 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000189252s
	[INFO] 10.244.0.7:42584 - 64233 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075884s
	[INFO] 10.244.0.7:42584 - 17133 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000043453s
	[INFO] 10.244.0.7:37081 - 7420 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000044415s
	[INFO] 10.244.0.7:37081 - 42995 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000041904s
	[INFO] 10.244.0.7:57302 - 60628 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035068s
	[INFO] 10.244.0.7:57302 - 36566 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000031008s
	[INFO] 10.244.0.7:38674 - 13771 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000028824s
	[INFO] 10.244.0.7:38674 - 18389 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000051632s
	[INFO] 10.244.0.22:50305 - 53594 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000411796s
	[INFO] 10.244.0.22:33601 - 28784 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000082061s
	[INFO] 10.244.0.22:47222 - 45431 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012319s
	[INFO] 10.244.0.22:45471 - 59317 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000055836s
	[INFO] 10.244.0.22:57069 - 53768 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000063639s
	[INFO] 10.244.0.22:57346 - 54554 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000112021s
	[INFO] 10.244.0.22:47088 - 52858 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000838726s
	[INFO] 10.244.0.22:50023 - 43526 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.000480694s
	[INFO] 10.244.0.26:42469 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000348585s
	[INFO] 10.244.0.26:60115 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000196037s
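The NXDOMAIN/NOERROR pairs above are consistent with the default ndots:5 search-path expansion: the client walks registry.kube-system.svc.cluster.local through each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the fully-qualified lookup returns NOERROR. The same behaviour can be observed from a throwaway pod (pod name and image below are placeholders, not from the test run):

  kubectl --context addons-973562 run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local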
	
	
	==> describe nodes <==
	Name:               addons-973562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-973562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=addons-973562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_07_09_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-973562
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:07:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-973562
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:16:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:14:49 +0000   Thu, 15 Aug 2024 17:07:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:14:49 +0000   Thu, 15 Aug 2024 17:07:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:14:49 +0000   Thu, 15 Aug 2024 17:07:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:14:49 +0000   Thu, 15 Aug 2024 17:07:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.200
	  Hostname:    addons-973562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 cdb6e2a853b14fdda051e6504cd494ec
	  System UUID:                cdb6e2a8-53b1-4fdd-a051-e6504cd494ec
	  Boot ID:                    6b438358-870b-4061-a65c-37cfc5f1b5de
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  default                     hello-world-app-55bf9c44b4-wzp2w         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	  headlamp                    headlamp-57fb76fcdb-lt6rm                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 coredns-6f6b679f8f-mpjgp                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     9m3s
	  kube-system                 etcd-addons-973562                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         9m9s
	  kube-system                 kube-apiserver-addons-973562             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-controller-manager-addons-973562    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-proxy-9zjlq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	  kube-system                 kube-scheduler-addons-973562             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 metrics-server-8988944d9-2rpw7           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         8m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 9m2s  kube-proxy       
	  Normal  Starting                 9m9s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m9s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m9s  kubelet          Node addons-973562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m9s  kubelet          Node addons-973562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m9s  kubelet          Node addons-973562 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m7s  kubelet          Node addons-973562 status is now: NodeReady
	  Normal  RegisteredNode           9m4s  node-controller  Node addons-973562 event: Registered Node addons-973562 in Controller
	
	
	==> dmesg <==
	[  +0.008138] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[  +5.000153] kauditd_printk_skb: 104 callbacks suppressed
	[  +5.023955] kauditd_printk_skb: 167 callbacks suppressed
	[  +8.582623] kauditd_printk_skb: 53 callbacks suppressed
	[Aug15 17:08] kauditd_printk_skb: 34 callbacks suppressed
	[ +48.881489] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.817830] kauditd_printk_skb: 16 callbacks suppressed
	[Aug15 17:09] kauditd_printk_skb: 83 callbacks suppressed
	[  +7.104704] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.518849] kauditd_printk_skb: 6 callbacks suppressed
	[ +23.459117] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.007597] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.588306] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.187096] kauditd_printk_skb: 47 callbacks suppressed
	[Aug15 17:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.077073] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.050695] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.048491] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.232311] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.091737] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.198622] kauditd_printk_skb: 36 callbacks suppressed
	[ +14.369628] kauditd_printk_skb: 41 callbacks suppressed
	[Aug15 17:12] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 17:14] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.087244] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [80fea5c45564c68d4b07a79b4775bf842c64dccfbaf01ed850f2a4c7738c6dd9] <==
	{"level":"info","ts":"2024-08-15T17:08:18.022011Z","caller":"traceutil/trace.go:171","msg":"trace[659034095] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:957; }","duration":"205.760586ms","start":"2024-08-15T17:08:17.816244Z","end":"2024-08-15T17:08:18.022005Z","steps":["trace[659034095] 'agreement among raft nodes before linearized reading'  (duration: 205.667408ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:08:18.022195Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.341643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:08:18.022233Z","caller":"traceutil/trace.go:171","msg":"trace[778957565] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:957; }","duration":"138.380112ms","start":"2024-08-15T17:08:17.883847Z","end":"2024-08-15T17:08:18.022227Z","steps":["trace[778957565] 'agreement among raft nodes before linearized reading'  (duration: 138.333227ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:09:02.637718Z","caller":"traceutil/trace.go:171","msg":"trace[1479075003] linearizableReadLoop","detail":"{readStateIndex:1171; appliedIndex:1170; }","duration":"222.710195ms","start":"2024-08-15T17:09:02.414979Z","end":"2024-08-15T17:09:02.637689Z","steps":["trace[1479075003] 'read index received'  (duration: 222.542398ms)","trace[1479075003] 'applied index is now lower than readState.Index'  (duration: 167.125µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:09:02.637860Z","caller":"traceutil/trace.go:171","msg":"trace[263805521] transaction","detail":"{read_only:false; response_revision:1130; number_of_response:1; }","duration":"321.65567ms","start":"2024-08-15T17:09:02.316172Z","end":"2024-08-15T17:09:02.637828Z","steps":["trace[263805521] 'process raft request'  (duration: 321.397523ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:09:02.637964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.96806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-08-15T17:09:02.637991Z","caller":"traceutil/trace.go:171","msg":"trace[956218439] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1130; }","duration":"223.009994ms","start":"2024-08-15T17:09:02.414975Z","end":"2024-08-15T17:09:02.637985Z","steps":["trace[956218439] 'agreement among raft nodes before linearized reading'  (duration: 222.911213ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:09:02.637998Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T17:09:02.316159Z","time spent":"321.743418ms","remote":"127.0.0.1:52432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1121 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-15T17:09:02.638131Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.37401ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:09:02.638146Z","caller":"traceutil/trace.go:171","msg":"trace[619905049] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1130; }","duration":"151.390267ms","start":"2024-08-15T17:09:02.486751Z","end":"2024-08-15T17:09:02.638142Z","steps":["trace[619905049] 'agreement among raft nodes before linearized reading'  (duration: 151.362195ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:09:51.407826Z","caller":"traceutil/trace.go:171","msg":"trace[1285894460] linearizableReadLoop","detail":"{readStateIndex:1321; appliedIndex:1320; }","duration":"106.389545ms","start":"2024-08-15T17:09:51.301423Z","end":"2024-08-15T17:09:51.407813Z","steps":["trace[1285894460] 'read index received'  (duration: 105.786286ms)","trace[1285894460] 'applied index is now lower than readState.Index'  (duration: 602.567µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:09:51.408285Z","caller":"traceutil/trace.go:171","msg":"trace[219595597] transaction","detail":"{read_only:false; response_revision:1269; number_of_response:1; }","duration":"114.744101ms","start":"2024-08-15T17:09:51.293530Z","end":"2024-08-15T17:09:51.408274Z","steps":["trace[219595597] 'process raft request'  (duration: 113.717188ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:09:51.408816Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.377482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:09:51.408932Z","caller":"traceutil/trace.go:171","msg":"trace[1075537955] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1269; }","duration":"107.504691ms","start":"2024-08-15T17:09:51.301419Z","end":"2024-08-15T17:09:51.408924Z","steps":["trace[1075537955] 'agreement among raft nodes before linearized reading'  (duration: 107.278909ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T17:10:35.502719Z","caller":"traceutil/trace.go:171","msg":"trace[1769090326] linearizableReadLoop","detail":"{readStateIndex:1638; appliedIndex:1637; }","duration":"279.676559ms","start":"2024-08-15T17:10:35.223013Z","end":"2024-08-15T17:10:35.502690Z","steps":["trace[1769090326] 'read index received'  (duration: 279.206377ms)","trace[1769090326] 'applied index is now lower than readState.Index'  (duration: 469.696µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:10:35.504699Z","caller":"traceutil/trace.go:171","msg":"trace[5475948] transaction","detail":"{read_only:false; response_revision:1569; number_of_response:1; }","duration":"309.991886ms","start":"2024-08-15T17:10:35.194689Z","end":"2024-08-15T17:10:35.504681Z","steps":["trace[5475948] 'process raft request'  (duration: 307.789053ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:10:35.505675Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.631607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-08-15T17:10:35.506459Z","caller":"traceutil/trace.go:171","msg":"trace[1778051888] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1569; }","duration":"283.439651ms","start":"2024-08-15T17:10:35.223010Z","end":"2024-08-15T17:10:35.506449Z","steps":["trace[1778051888] 'agreement among raft nodes before linearized reading'  (duration: 282.567612ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:10:35.507228Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.234927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:10:35.507875Z","caller":"traceutil/trace.go:171","msg":"trace[1556534310] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:1569; }","duration":"267.890328ms","start":"2024-08-15T17:10:35.239974Z","end":"2024-08-15T17:10:35.507864Z","steps":["trace[1556534310] 'agreement among raft nodes before linearized reading'  (duration: 267.147398ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:10:35.505963Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.167192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:10:35.508616Z","caller":"traceutil/trace.go:171","msg":"trace[1116694190] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1569; }","duration":"282.819995ms","start":"2024-08-15T17:10:35.225788Z","end":"2024-08-15T17:10:35.508608Z","steps":["trace[1116694190] 'agreement among raft nodes before linearized reading'  (duration: 280.154463ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T17:10:35.505857Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T17:10:35.194671Z","time spent":"310.725508ms","remote":"127.0.0.1:52432","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1567 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-15T17:11:10.991020Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.127675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:11:10.991104Z","caller":"traceutil/trace.go:171","msg":"trace[948880056] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1888; }","duration":"107.227148ms","start":"2024-08-15T17:11:10.883866Z","end":"2024-08-15T17:11:10.991093Z","steps":["trace[948880056] 'range keys from in-memory index tree'  (duration: 107.009282ms)"],"step_count":1}
	
	
	==> kernel <==
	 17:16:17 up 9 min,  0 users,  load average: 0.04, 0.62, 0.55
	Linux addons-973562 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [3534ecea3b438bf44120ffd8e4e6dc0eefcc0893a5383863ad0ddbd1353953b2] <==
	I0815 17:09:10.047175       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0815 17:10:03.349103       1 conn.go:339] Error on socket receive: read tcp 192.168.39.200:8443->192.168.39.1:55528: use of closed network connection
	E0815 17:10:03.566517       1 conn.go:339] Error on socket receive: read tcp 192.168.39.200:8443->192.168.39.1:55550: use of closed network connection
	E0815 17:10:39.660151       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0815 17:10:41.305778       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0815 17:10:48.561477       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0815 17:10:49.611612       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0815 17:10:54.654484       1 watch.go:250] "Unhandled Error" err="client disconnected" logger="UnhandledError"
	I0815 17:11:01.247561       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.247600       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.279100       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.279157       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.309740       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.310108       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.326796       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.326850       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.476226       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0815 17:11:01.476277       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0815 17:11:01.897488       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.241.9"}
	I0815 17:11:02.090731       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0815 17:11:02.264529       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.111.110"}
	W0815 17:11:02.327442       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0815 17:11:02.476908       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0815 17:11:02.481666       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0815 17:14:26.210087       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.11.254"}
	
	
	==> kube-controller-manager [3a83258a356d612266f08b866760653c18e1329481cde8025bb1b49412a4784f] <==
	W0815 17:14:29.496824       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:14:29.496940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 17:14:29.597059       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="7.366669ms"
	I0815 17:14:29.597486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.087µs"
	W0815 17:14:35.028368       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:14:35.028421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 17:14:38.282599       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0815 17:14:49.307974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-973562"
	W0815 17:14:52.002591       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:14:52.002669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:15:05.298460       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:15:05.298591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:15:23.241374       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:15:23.241537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:15:25.735166       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:15:25.735217       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:15:38.238715       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:15:38.238834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:15:51.742203       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:15:51.742255       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:16:10.173142       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:16:10.173367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0815 17:16:11.502858       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0815 17:16:11.502950       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0815 17:16:15.912114       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-8988944d9" duration="7.87µs"
	
	
	==> kube-proxy [b76f57abbb47d25a09ada3d4c9c62d3c1f077dba5cc3555a0c7ee9cdb80b5afe] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:07:15.017266       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 17:07:15.029626       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.200"]
	E0815 17:07:15.032460       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:07:15.119956       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:07:15.119989       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:07:15.120048       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:07:15.131076       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:07:15.131279       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:07:15.131290       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:07:15.133199       1 config.go:197] "Starting service config controller"
	I0815 17:07:15.133208       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:07:15.133223       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:07:15.133226       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:07:15.137158       1 config.go:326] "Starting node config controller"
	I0815 17:07:15.137168       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:07:15.234398       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 17:07:15.234435       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:07:15.237373       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3128463831d397650135e24543103e466ce2084eee25be91527b03cd11840c97] <==
	W0815 17:07:06.176934       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:07:06.178714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.176982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 17:07:06.178802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177031       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:07:06.178855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177073       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:07:06.178907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 17:07:06.181578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:07:06.181642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:06.177203       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 17:07:06.181708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.119923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:07:07.120023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.197757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:07:07.197886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.209005       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:07:07.209098       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.240143       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:07:07.240192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:07:07.362515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:07:07.362687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 17:07:07.733213       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 17:15:19 addons-973562 kubelet[1222]: E0815 17:15:19.107499    1222 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742119106784333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:19 addons-973562 kubelet[1222]: E0815 17:15:19.107587    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742119106784333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:29 addons-973562 kubelet[1222]: E0815 17:15:29.110020    1222 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742129109632796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:29 addons-973562 kubelet[1222]: E0815 17:15:29.110524    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742129109632796,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:31 addons-973562 kubelet[1222]: I0815 17:15:31.778398    1222 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:15:39 addons-973562 kubelet[1222]: E0815 17:15:39.113210    1222 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742139112851129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:39 addons-973562 kubelet[1222]: E0815 17:15:39.113253    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742139112851129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:49 addons-973562 kubelet[1222]: E0815 17:15:49.115351    1222 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742149114936250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:49 addons-973562 kubelet[1222]: E0815 17:15:49.115396    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742149114936250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:59 addons-973562 kubelet[1222]: E0815 17:15:59.124655    1222 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742159118858783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:15:59 addons-973562 kubelet[1222]: E0815 17:15:59.124717    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742159118858783,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:16:08 addons-973562 kubelet[1222]: E0815 17:16:08.796212    1222 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 17:16:08 addons-973562 kubelet[1222]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:16:08 addons-973562 kubelet[1222]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:16:08 addons-973562 kubelet[1222]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:16:08 addons-973562 kubelet[1222]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:16:09 addons-973562 kubelet[1222]: E0815 17:16:09.130094    1222 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742169129333916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:16:09 addons-973562 kubelet[1222]: E0815 17:16:09.130222    1222 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723742169129333916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:590423,},InodesUsed:&UInt64Value{Value:210,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:16:15 addons-973562 kubelet[1222]: I0815 17:16:15.939075    1222 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-wzp2w" podStartSLOduration=107.344382234 podStartE2EDuration="1m49.939051162s" podCreationTimestamp="2024-08-15 17:14:26 +0000 UTC" firstStartedPulling="2024-08-15 17:14:26.624490855 +0000 UTC m=+437.970268333" lastFinishedPulling="2024-08-15 17:14:29.219159784 +0000 UTC m=+440.564937261" observedRunningTime="2024-08-15 17:14:29.588849885 +0000 UTC m=+440.934627384" watchObservedRunningTime="2024-08-15 17:16:15.939051162 +0000 UTC m=+547.284828654"
	Aug 15 17:16:17 addons-973562 kubelet[1222]: I0815 17:16:17.343695    1222 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmj7v\" (UniqueName: \"kubernetes.io/projected/5ccb0984-23af-4380-b4e7-c266d3917b45-kube-api-access-gmj7v\") pod \"5ccb0984-23af-4380-b4e7-c266d3917b45\" (UID: \"5ccb0984-23af-4380-b4e7-c266d3917b45\") "
	Aug 15 17:16:17 addons-973562 kubelet[1222]: I0815 17:16:17.343791    1222 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ccb0984-23af-4380-b4e7-c266d3917b45-tmp-dir\") pod \"5ccb0984-23af-4380-b4e7-c266d3917b45\" (UID: \"5ccb0984-23af-4380-b4e7-c266d3917b45\") "
	Aug 15 17:16:17 addons-973562 kubelet[1222]: I0815 17:16:17.344411    1222 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ccb0984-23af-4380-b4e7-c266d3917b45-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "5ccb0984-23af-4380-b4e7-c266d3917b45" (UID: "5ccb0984-23af-4380-b4e7-c266d3917b45"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Aug 15 17:16:17 addons-973562 kubelet[1222]: I0815 17:16:17.354937    1222 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ccb0984-23af-4380-b4e7-c266d3917b45-kube-api-access-gmj7v" (OuterVolumeSpecName: "kube-api-access-gmj7v") pod "5ccb0984-23af-4380-b4e7-c266d3917b45" (UID: "5ccb0984-23af-4380-b4e7-c266d3917b45"). InnerVolumeSpecName "kube-api-access-gmj7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 17:16:17 addons-973562 kubelet[1222]: I0815 17:16:17.444496    1222 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ccb0984-23af-4380-b4e7-c266d3917b45-tmp-dir\") on node \"addons-973562\" DevicePath \"\""
	Aug 15 17:16:17 addons-973562 kubelet[1222]: I0815 17:16:17.444524    1222 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gmj7v\" (UniqueName: \"kubernetes.io/projected/5ccb0984-23af-4380-b4e7-c266d3917b45-kube-api-access-gmj7v\") on node \"addons-973562\" DevicePath \"\""
	
	
	==> storage-provisioner [7a08fe240691c0b9be06b8345c98eef027070426d147fe5fd30b808dd98b725e] <==
	I0815 17:07:20.818609       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:07:20.844232       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:07:20.844384       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:07:20.889431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:07:20.890428       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-973562_365c5dec-5ae0-4e58-a19c-7bd73df05d0f!
	I0815 17:07:20.891446       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46b7b630-dbf4-4aa1-a49f-b9ac7c30c938", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-973562_365c5dec-5ae0-4e58-a19c-7bd73df05d0f became leader
	I0815 17:07:21.010510       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-973562_365c5dec-5ae0-4e58-a19c-7bd73df05d0f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-973562 -n addons-973562
helpers_test.go:261: (dbg) Run:  kubectl --context addons-973562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-8988944d9-2rpw7
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-973562 describe pod metrics-server-8988944d9-2rpw7
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-973562 describe pod metrics-server-8988944d9-2rpw7: exit status 1 (73.915467ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8988944d9-2rpw7" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-973562 describe pod metrics-server-8988944d9-2rpw7: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (349.39s)
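The post-mortem "describe pod" above fails with NotFound because the metrics-server pod had already been deleted by the time the helper ran. A hedged manual check (assuming the addon keeps the usual k8s-app=metrics-server label) would target the controller objects rather than a fixed pod name:

	kubectl --context addons-973562 -n kube-system get deploy,rs,pods -l k8s-app=metrics-server -o wide
	# if nothing matches the label, fall back to recent events
	kubectl --context addons-973562 -n kube-system get events --sort-by=.lastTimestamp | grep -i metrics-server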

                                                
                                    
TestAddons/StoppedEnableDisable (154.4s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-973562
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-973562: exit status 82 (2m0.454038306s)

                                                
                                                
-- stdout --
	* Stopping node "addons-973562"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-973562" : exit status 82
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-973562
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-973562: exit status 11 (21.655081169s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-973562" : exit status 11
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-973562
addons_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-973562: exit status 11 (6.142881294s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:184: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-973562" : exit status 11
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-973562
addons_test.go:187: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-973562: exit status 11 (6.143283514s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.200:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:189: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-973562" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.40s)
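The failures above share one root cause: "minikube stop" hit GUEST_STOP_TIMEOUT with the VM still reported as "Running", and every following addon command then failed because SSH to 192.168.39.200:22 was unreachable (no route to host). A hedged manual triage for the kvm2 driver (assuming the default qemu:///system libvirt connection and that the domain name matches the profile name) might look like:

	out/minikube-linux-amd64 -p addons-973562 logs --file=logs.txt
	virsh --connect qemu:///system list --all              # is the addons-973562 domain actually still running?
	ping -c 1 192.168.39.200                               # reproduce the route failure seen by the addon commands
	virsh --connect qemu:///system destroy addons-973562   # hard power-off, only as a last resort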

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (3.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image rm kicbase/echo-server:functional-773344 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 image rm kicbase/echo-server:functional-773344 --alsologtostderr: (2.954247894s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls
functional_test.go:403: expected "kicbase/echo-server:functional-773344" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (3.25s)
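Here "image rm" exits cleanly but the tag is still listed afterwards. A hedged way to see where the reference survives inside the functional-773344 VM (assuming the CRI-O runtime and podman-backed storage used elsewhere in this run) is to compare minikube's listing with the runtime's own:

	out/minikube-linux-amd64 -p functional-773344 image ls
	out/minikube-linux-amd64 -p functional-773344 ssh -- sudo crictl images | grep echo-server
	out/minikube-linux-amd64 -p functional-773344 ssh -- sudo podman images | grep echo-server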

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:411: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0815 17:27:59.905504   30562 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:27:59.905673   30562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:27:59.905683   30562 out.go:358] Setting ErrFile to fd 2...
	I0815 17:27:59.905690   30562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:27:59.905901   30562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:27:59.906479   30562 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:27:59.906598   30562 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:27:59.906979   30562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:27:59.907039   30562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:27:59.921655   30562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39797
	I0815 17:27:59.922119   30562 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:27:59.922765   30562 main.go:141] libmachine: Using API Version  1
	I0815 17:27:59.922792   30562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:27:59.923116   30562 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:27:59.923318   30562 main.go:141] libmachine: (functional-773344) Calling .GetState
	I0815 17:27:59.925202   30562 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:27:59.925238   30562 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:27:59.939236   30562 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I0815 17:27:59.939584   30562 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:27:59.940051   30562 main.go:141] libmachine: Using API Version  1
	I0815 17:27:59.940070   30562 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:27:59.940412   30562 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:27:59.940610   30562 main.go:141] libmachine: (functional-773344) Calling .DriverName
	I0815 17:27:59.940892   30562 ssh_runner.go:195] Run: systemctl --version
	I0815 17:27:59.940918   30562 main.go:141] libmachine: (functional-773344) Calling .GetSSHHostname
	I0815 17:27:59.944059   30562 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
	I0815 17:27:59.944455   30562 main.go:141] libmachine: (functional-773344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:cf:88", ip: ""} in network mk-functional-773344: {Iface:virbr1 ExpiryTime:2024-08-15 18:19:58 +0000 UTC Type:0 Mac:52:54:00:ad:cf:88 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-773344 Clientid:01:52:54:00:ad:cf:88}
	I0815 17:27:59.944479   30562 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined IP address 192.168.39.182 and MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
	I0815 17:27:59.944550   30562 main.go:141] libmachine: (functional-773344) Calling .GetSSHPort
	I0815 17:27:59.944715   30562 main.go:141] libmachine: (functional-773344) Calling .GetSSHKeyPath
	I0815 17:27:59.944803   30562 main.go:141] libmachine: (functional-773344) Calling .GetSSHUsername
	I0815 17:27:59.944952   30562 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/functional-773344/id_rsa Username:docker}
	I0815 17:28:00.023522   30562 cache_images.go:289] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	I0815 17:28:00.023635   30562 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/echo-server-save.tar
	I0815 17:28:00.028111   30562 ssh_runner.go:362] scp /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --> /var/lib/minikube/images/echo-server-save.tar (4950016 bytes)
	I0815 17:28:00.166257   30562 crio.go:275] Loading image: /var/lib/minikube/images/echo-server-save.tar
	I0815 17:28:00.166355   30562 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/echo-server-save.tar
	W0815 17:28:00.523379   30562 cache_images.go:253] Failed to load cached images for "functional-773344": loading images: CRI-O load /var/lib/minikube/images/echo-server-save.tar: crio load image: sudo podman load -i /var/lib/minikube/images/echo-server-save.tar: Process exited with status 125
	stdout:
	
	stderr:
	Getting image source signatures
	Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)
	I0815 17:28:00.523409   30562 cache_images.go:265] failed pushing to: functional-773344
	I0815 17:28:00.523445   30562 main.go:141] libmachine: Making call to close driver server
	I0815 17:28:00.523454   30562 main.go:141] libmachine: (functional-773344) Calling .Close
	I0815 17:28:00.523722   30562 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:28:00.523742   30562 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:28:00.523751   30562 main.go:141] libmachine: Making call to close driver server
	I0815 17:28:00.523762   30562 main.go:141] libmachine: (functional-773344) Calling .Close
	I0815 17:28:00.524020   30562 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
	I0815 17:28:00.524036   30562 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:28:00.524067   30562 main.go:141] libmachine: Making call to close connection to plugin binary

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)
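
The load fails inside the guest when "sudo podman load -i" rejects /var/lib/minikube/images/echo-server-save.tar with "payload does not match any of the supported image formats", even though the ~4.9 MB scp into the VM succeeded, which suggests the tarball written by the earlier image-save step is the suspect. A minimal sketch for checking such a tarball by hand, assuming a local copy named echo-server-save.tar (a hypothetical path, not one taken from the test), looks for the markers those loaders expect:

package main

import (
	"archive/tar"
	"fmt"
	"io"
	"os"
	"strings"
)

// inspectArchive walks an image tarball and reports whether it carries the
// markers podman's loaders look for: manifest.json (docker-archive) or
// oci-layout (OCI layout). Hypothetical helper, not minikube code.
func inspectArchive(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	var hasManifest, hasOCILayout bool
	tr := tar.NewReader(f)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return fmt.Errorf("not a readable tar archive: %w", err)
		}
		switch strings.TrimPrefix(hdr.Name, "./") {
		case "manifest.json":
			hasManifest = true
		case "oci-layout":
			hasOCILayout = true
		}
	}
	fmt.Printf("docker-archive marker: %v, oci-layout marker: %v\n", hasManifest, hasOCILayout)
	return nil
}

func main() {
	if err := inspectArchive("echo-server-save.tar"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

If neither marker is present (or the file is not a readable tar at all), podman reports exactly the error seen in the stderr block above.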

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-773344
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image save --daemon kicbase/echo-server:functional-773344 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 image save --daemon kicbase/echo-server:functional-773344 --alsologtostderr: (4.555687794s)
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-773344
functional_test.go:432: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-773344: exit status 1 (15.881593ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-773344

                                                
                                                
** /stderr **
functional_test.go:434: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-773344

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (4.59s)
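
Here "image save --daemon" exits successfully, yet "docker image inspect" cannot find localhost/kicbase/echo-server:functional-773344 in the host daemon. When reproducing this by hand, listing every repository:tag the daemon knows about makes a near-miss (the image stored under a different name, or no new image at all) easy to spot. The sketch below only shells out to the docker CLI and is not part of the test suite:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Print every repository:tag in the local Docker daemon so a renamed or
	// missing image is visible at a glance.
	out, err := exec.Command("docker", "image", "ls",
		"--format", "{{.Repository}}:{{.Tag}}").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "docker image ls failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Print(string(out))
}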

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 node stop m02 -v=7 --alsologtostderr
E0815 17:34:09.673315   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:34:52.218227   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:35:31.595078   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.462361328s)

                                                
                                                
-- stdout --
	* Stopping node "ha-683878-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:34:06.071649   36637 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:34:06.071957   36637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:34:06.071972   36637 out.go:358] Setting ErrFile to fd 2...
	I0815 17:34:06.071978   36637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:34:06.072158   36637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:34:06.072394   36637 mustload.go:65] Loading cluster: ha-683878
	I0815 17:34:06.072834   36637 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:34:06.072849   36637 stop.go:39] StopHost: ha-683878-m02
	I0815 17:34:06.073248   36637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:34:06.073283   36637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:34:06.088374   36637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34821
	I0815 17:34:06.088836   36637 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:34:06.089346   36637 main.go:141] libmachine: Using API Version  1
	I0815 17:34:06.089371   36637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:34:06.089742   36637 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:34:06.091732   36637 out.go:177] * Stopping node "ha-683878-m02"  ...
	I0815 17:34:06.092967   36637 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 17:34:06.093004   36637 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:34:06.093258   36637 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 17:34:06.093294   36637 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:34:06.095919   36637 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:34:06.096314   36637 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:34:06.096346   36637 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:34:06.096473   36637 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:34:06.096645   36637 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:34:06.096786   36637 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:34:06.096953   36637 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:34:06.179471   36637 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 17:34:06.239468   36637 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 17:34:06.295194   36637 main.go:141] libmachine: Stopping "ha-683878-m02"...
	I0815 17:34:06.295218   36637 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:34:06.296795   36637 main.go:141] libmachine: (ha-683878-m02) Calling .Stop
	I0815 17:34:06.300262   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 0/120
	I0815 17:34:07.302249   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 1/120
	I0815 17:34:08.304377   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 2/120
	I0815 17:34:09.305752   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 3/120
	I0815 17:34:10.307770   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 4/120
	I0815 17:34:11.309726   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 5/120
	I0815 17:34:12.311463   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 6/120
	I0815 17:34:13.312723   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 7/120
	I0815 17:34:14.315016   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 8/120
	I0815 17:34:15.316288   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 9/120
	I0815 17:34:16.318593   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 10/120
	I0815 17:34:17.320128   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 11/120
	I0815 17:34:18.321454   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 12/120
	I0815 17:34:19.322878   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 13/120
	I0815 17:34:20.324116   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 14/120
	I0815 17:34:21.325742   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 15/120
	I0815 17:34:22.327176   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 16/120
	I0815 17:34:23.328438   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 17/120
	I0815 17:34:24.329815   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 18/120
	I0815 17:34:25.330959   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 19/120
	I0815 17:34:26.332995   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 20/120
	I0815 17:34:27.334642   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 21/120
	I0815 17:34:28.335925   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 22/120
	I0815 17:34:29.337407   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 23/120
	I0815 17:34:30.338968   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 24/120
	I0815 17:34:31.340505   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 25/120
	I0815 17:34:32.341926   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 26/120
	I0815 17:34:33.343883   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 27/120
	I0815 17:34:34.345180   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 28/120
	I0815 17:34:35.346990   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 29/120
	I0815 17:34:36.348740   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 30/120
	I0815 17:34:37.351068   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 31/120
	I0815 17:34:38.352540   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 32/120
	I0815 17:34:39.354267   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 33/120
	I0815 17:34:40.355669   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 34/120
	I0815 17:34:41.357603   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 35/120
	I0815 17:34:42.359054   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 36/120
	I0815 17:34:43.360415   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 37/120
	I0815 17:34:44.361686   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 38/120
	I0815 17:34:45.363043   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 39/120
	I0815 17:34:46.365144   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 40/120
	I0815 17:34:47.366340   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 41/120
	I0815 17:34:48.367814   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 42/120
	I0815 17:34:49.369283   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 43/120
	I0815 17:34:50.370833   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 44/120
	I0815 17:34:51.373023   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 45/120
	I0815 17:34:52.375080   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 46/120
	I0815 17:34:53.376433   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 47/120
	I0815 17:34:54.377823   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 48/120
	I0815 17:34:55.379201   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 49/120
	I0815 17:34:56.380455   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 50/120
	I0815 17:34:57.381859   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 51/120
	I0815 17:34:58.383323   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 52/120
	I0815 17:34:59.384734   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 53/120
	I0815 17:35:00.386961   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 54/120
	I0815 17:35:01.389029   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 55/120
	I0815 17:35:02.390968   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 56/120
	I0815 17:35:03.392192   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 57/120
	I0815 17:35:04.393508   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 58/120
	I0815 17:35:05.394754   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 59/120
	I0815 17:35:06.396891   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 60/120
	I0815 17:35:07.399201   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 61/120
	I0815 17:35:08.400611   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 62/120
	I0815 17:35:09.401858   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 63/120
	I0815 17:35:10.403834   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 64/120
	I0815 17:35:11.405079   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 65/120
	I0815 17:35:12.407290   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 66/120
	I0815 17:35:13.408809   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 67/120
	I0815 17:35:14.411022   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 68/120
	I0815 17:35:15.412365   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 69/120
	I0815 17:35:16.414298   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 70/120
	I0815 17:35:17.415635   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 71/120
	I0815 17:35:18.416872   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 72/120
	I0815 17:35:19.418948   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 73/120
	I0815 17:35:20.420354   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 74/120
	I0815 17:35:21.422144   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 75/120
	I0815 17:35:22.423458   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 76/120
	I0815 17:35:23.424951   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 77/120
	I0815 17:35:24.427119   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 78/120
	I0815 17:35:25.428302   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 79/120
	I0815 17:35:26.430321   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 80/120
	I0815 17:35:27.431866   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 81/120
	I0815 17:35:28.433332   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 82/120
	I0815 17:35:29.434752   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 83/120
	I0815 17:35:30.436280   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 84/120
	I0815 17:35:31.437611   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 85/120
	I0815 17:35:32.438978   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 86/120
	I0815 17:35:33.440485   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 87/120
	I0815 17:35:34.442610   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 88/120
	I0815 17:35:35.444004   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 89/120
	I0815 17:35:36.446042   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 90/120
	I0815 17:35:37.447877   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 91/120
	I0815 17:35:38.449277   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 92/120
	I0815 17:35:39.450940   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 93/120
	I0815 17:35:40.452222   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 94/120
	I0815 17:35:41.453995   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 95/120
	I0815 17:35:42.455607   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 96/120
	I0815 17:35:43.456894   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 97/120
	I0815 17:35:44.459207   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 98/120
	I0815 17:35:45.460340   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 99/120
	I0815 17:35:46.462229   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 100/120
	I0815 17:35:47.463587   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 101/120
	I0815 17:35:48.465251   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 102/120
	I0815 17:35:49.466693   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 103/120
	I0815 17:35:50.468026   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 104/120
	I0815 17:35:51.469963   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 105/120
	I0815 17:35:52.471393   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 106/120
	I0815 17:35:53.472641   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 107/120
	I0815 17:35:54.475120   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 108/120
	I0815 17:35:55.476242   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 109/120
	I0815 17:35:56.478291   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 110/120
	I0815 17:35:57.479482   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 111/120
	I0815 17:35:58.481371   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 112/120
	I0815 17:35:59.482791   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 113/120
	I0815 17:36:00.483996   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 114/120
	I0815 17:36:01.485929   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 115/120
	I0815 17:36:02.487264   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 116/120
	I0815 17:36:03.488399   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 117/120
	I0815 17:36:04.489817   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 118/120
	I0815 17:36:05.491359   36637 main.go:141] libmachine: (ha-683878-m02) Waiting for machine to stop 119/120
	I0815 17:36:06.492430   36637 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 17:36:06.492578   36637 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-683878 node stop m02 -v=7 --alsologtostderr": exit status 30
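
The stop attempt above is a fixed polling loop: the kvm2 driver asks for the domain state once per second and gives up after 120 attempts with the state still "Running", which is what becomes exit status 30. A minimal sketch of that bounded-polling shape, with the one-second interval and attempt count read off the timestamps in the log (the real libmachine code differs in detail):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForStop polls getState once per second for up to maxAttempts,
// mirroring the "Waiting for machine to stop N/120" lines above.
// getState is a stand-in for the driver's GetState call.
func waitForStop(getState func() (string, error), maxAttempts int) error {
	for i := 0; i < maxAttempts; i++ {
		st, err := getState()
		if err != nil {
			return err
		}
		if st == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Simulate a VM that never leaves "Running", as in the failed test.
	// The driver in the log used 120 attempts; 5 keeps the demo short.
	err := waitForStop(func() (string, error) { return "Running", nil }, 5)
	fmt.Println("stop err:", err)
}

With a healthy guest the loop returns as soon as the state reads "Stopped"; here all 120 checks still saw "Running", so the command surfaced the Temporary Error shown above.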
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
E0815 17:36:15.288658   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 3 (19.051961992s)

                                                
                                                
-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:36:06.535960   37051 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:36:06.536067   37051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:06.536076   37051 out.go:358] Setting ErrFile to fd 2...
	I0815 17:36:06.536080   37051 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:06.536240   37051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:36:06.536398   37051 out.go:352] Setting JSON to false
	I0815 17:36:06.536422   37051 mustload.go:65] Loading cluster: ha-683878
	I0815 17:36:06.536541   37051 notify.go:220] Checking for updates...
	I0815 17:36:06.536801   37051 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:36:06.536817   37051 status.go:255] checking status of ha-683878 ...
	I0815 17:36:06.537186   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:06.537239   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:06.552804   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44945
	I0815 17:36:06.553183   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:06.553673   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:06.553693   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:06.554061   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:06.554292   37051 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:36:06.555736   37051 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:36:06.555751   37051 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:06.556028   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:06.556077   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:06.570537   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43265
	I0815 17:36:06.570915   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:06.571451   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:06.571485   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:06.571865   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:06.572061   37051 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:36:06.574832   37051 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:06.575236   37051 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:06.575262   37051 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:06.575419   37051 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:06.575801   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:06.575844   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:06.590683   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39493
	I0815 17:36:06.591059   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:06.591489   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:06.591519   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:06.591801   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:06.591954   37051 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:36:06.592158   37051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:06.592192   37051 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:36:06.594943   37051 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:06.595425   37051 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:06.595449   37051 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:06.595589   37051 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:36:06.595744   37051 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:36:06.595890   37051 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:36:06.596022   37051 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:36:06.682887   37051 ssh_runner.go:195] Run: systemctl --version
	I0815 17:36:06.690297   37051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:06.706444   37051 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:06.706476   37051 api_server.go:166] Checking apiserver status ...
	I0815 17:36:06.706514   37051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:06.728413   37051 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:36:06.739104   37051 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:06.739173   37051 ssh_runner.go:195] Run: ls
	I0815 17:36:06.743848   37051 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:06.748096   37051 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:06.748116   37051 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:36:06.748128   37051 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:06.748149   37051 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:36:06.748564   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:06.748617   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:06.762936   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43439
	I0815 17:36:06.763386   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:06.763919   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:06.763939   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:06.764247   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:06.764404   37051 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:36:06.766136   37051 status.go:330] ha-683878-m02 host status = "Running" (err=<nil>)
	I0815 17:36:06.766152   37051 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:06.766477   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:06.766514   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:06.781137   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41375
	I0815 17:36:06.781514   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:06.781907   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:06.781932   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:06.782284   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:06.782477   37051 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:36:06.785304   37051 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:06.785698   37051 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:06.785722   37051 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:06.785868   37051 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:06.786210   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:06.786240   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:06.800075   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46069
	I0815 17:36:06.800475   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:06.800886   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:06.800914   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:06.801240   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:06.801500   37051 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:36:06.801732   37051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:06.801755   37051 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:36:06.804846   37051 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:06.805249   37051 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:06.805273   37051 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:06.805411   37051 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:36:06.805589   37051 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:36:06.805714   37051 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:36:06.805827   37051 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	W0815 17:36:25.168698   37051 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:36:25.168807   37051 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0815 17:36:25.168823   37051 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:25.168830   37051 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 17:36:25.168865   37051 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:25.168873   37051 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:36:25.169190   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:25.169271   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:25.183702   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33205
	I0815 17:36:25.184095   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:25.184600   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:25.184625   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:25.184919   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:25.185146   37051 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:36:25.186594   37051 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:36:25.186611   37051 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:25.186905   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:25.186952   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:25.205185   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44627
	I0815 17:36:25.205637   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:25.206190   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:25.206216   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:25.206523   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:25.206722   37051 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:36:25.209304   37051 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:25.209708   37051 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:25.209737   37051 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:25.209844   37051 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:25.210193   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:25.210234   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:25.225557   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0815 17:36:25.225995   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:25.226461   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:25.226495   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:25.226786   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:25.226958   37051 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:36:25.227122   37051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:25.227143   37051 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:36:25.230109   37051 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:25.230554   37051 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:25.230598   37051 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:25.230775   37051 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:36:25.230936   37051 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:36:25.231124   37051 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:36:25.231238   37051 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:36:25.314678   37051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:25.336756   37051 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:25.336786   37051 api_server.go:166] Checking apiserver status ...
	I0815 17:36:25.336825   37051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:25.355762   37051 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:36:25.373633   37051 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:25.373691   37051 ssh_runner.go:195] Run: ls
	I0815 17:36:25.378677   37051 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:25.384947   37051 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:25.384975   37051 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:36:25.384987   37051 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:25.385006   37051 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:36:25.385387   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:25.385423   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:25.399836   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42551
	I0815 17:36:25.400231   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:25.400688   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:25.400710   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:25.401015   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:25.401215   37051 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:36:25.402673   37051 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:36:25.402693   37051 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:25.402991   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:25.403029   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:25.417016   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34933
	I0815 17:36:25.417399   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:25.417891   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:25.417914   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:25.418198   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:25.418388   37051 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:36:25.421216   37051 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:25.421751   37051 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:25.421775   37051 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:25.421924   37051 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:25.422385   37051 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:25.422434   37051 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:25.436749   37051 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41827
	I0815 17:36:25.437142   37051 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:25.437587   37051 main.go:141] libmachine: Using API Version  1
	I0815 17:36:25.437607   37051 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:25.437934   37051 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:25.438124   37051 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:36:25.438344   37051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:25.438361   37051 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:36:25.440935   37051 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:25.441345   37051 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:25.441364   37051 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:25.441519   37051 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:36:25.441671   37051 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:36:25.441807   37051 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:36:25.441921   37051 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:36:25.525729   37051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:25.544063   37051 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr" : exit status 3
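
The follow-up status call marks ha-683878-m02 as Host:Error / kubelet:Nonexistent because the SSH dial to 192.168.39.232:22 fails with "no route to host". A quick manual reachability probe of that port, sketched below with the m02 IP from this log used only as an example value:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// The address is the m02 IP taken from the log above; adjust as needed.
	addr := "192.168.39.232:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A "connect: no route to host" here matches the status output above.
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", addr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("ssh port reachable:", addr)
}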
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683878 -n ha-683878
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683878 logs -n 25: (1.41159768s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878:/home/docker/cp-test_ha-683878-m03_ha-683878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878 sudo cat                                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m02:/home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m02 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04:/home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m04 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp testdata/cp-test.txt                                                | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878:/home/docker/cp-test_ha-683878-m04_ha-683878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878 sudo cat                                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m02:/home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m02 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03:/home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m03 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683878 node stop m02 -v=7                                                     | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:28:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:28:34.800374   32399 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:28:34.800479   32399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:28:34.800504   32399 out.go:358] Setting ErrFile to fd 2...
	I0815 17:28:34.800512   32399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:28:34.800695   32399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:28:34.801271   32399 out.go:352] Setting JSON to false
	I0815 17:28:34.802107   32399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4261,"bootTime":1723738654,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:28:34.802164   32399 start.go:139] virtualization: kvm guest
	I0815 17:28:34.804236   32399 out.go:177] * [ha-683878] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:28:34.805491   32399 notify.go:220] Checking for updates...
	I0815 17:28:34.805523   32399 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:28:34.806921   32399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:28:34.808443   32399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:28:34.809727   32399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:28:34.810839   32399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:28:34.811973   32399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:28:34.813220   32399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:28:34.849062   32399 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 17:28:34.850087   32399 start.go:297] selected driver: kvm2
	I0815 17:28:34.850100   32399 start.go:901] validating driver "kvm2" against <nil>
	I0815 17:28:34.850111   32399 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:28:34.850761   32399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:28:34.850838   32399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 17:28:34.865056   32399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 17:28:34.865108   32399 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:28:34.865309   32399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:28:34.865370   32399 cni.go:84] Creating CNI manager for ""
	I0815 17:28:34.865382   32399 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0815 17:28:34.865390   32399 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:28:34.865439   32399 start.go:340] cluster config:
	{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:28:34.865525   32399 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:28:34.867162   32399 out.go:177] * Starting "ha-683878" primary control-plane node in "ha-683878" cluster
	I0815 17:28:34.868155   32399 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:28:34.868196   32399 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:28:34.868209   32399 cache.go:56] Caching tarball of preloaded images
	I0815 17:28:34.868281   32399 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:28:34.868295   32399 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:28:34.868647   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:28:34.868671   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json: {Name:mk42d1859c56aeb2f4ea506a56543ef14b895257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:28:34.868838   32399 start.go:360] acquireMachinesLock for ha-683878: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:28:34.868878   32399 start.go:364] duration metric: took 24.715µs to acquireMachinesLock for "ha-683878"
	I0815 17:28:34.868902   32399 start.go:93] Provisioning new machine with config: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:28:34.868992   32399 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 17:28:34.870549   32399 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:28:34.870682   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:28:34.870724   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:28:34.884647   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0815 17:28:34.885062   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:28:34.885643   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:28:34.885667   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:28:34.885948   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:28:34.886145   32399 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:28:34.886300   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:34.886445   32399 start.go:159] libmachine.API.Create for "ha-683878" (driver="kvm2")
	I0815 17:28:34.886482   32399 client.go:168] LocalClient.Create starting
	I0815 17:28:34.886520   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 17:28:34.886562   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:28:34.886575   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:28:34.886628   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 17:28:34.886646   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:28:34.886657   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:28:34.886678   32399 main.go:141] libmachine: Running pre-create checks...
	I0815 17:28:34.886685   32399 main.go:141] libmachine: (ha-683878) Calling .PreCreateCheck
	I0815 17:28:34.886994   32399 main.go:141] libmachine: (ha-683878) Calling .GetConfigRaw
	I0815 17:28:34.887356   32399 main.go:141] libmachine: Creating machine...
	I0815 17:28:34.887372   32399 main.go:141] libmachine: (ha-683878) Calling .Create
	I0815 17:28:34.887511   32399 main.go:141] libmachine: (ha-683878) Creating KVM machine...
	I0815 17:28:34.888649   32399 main.go:141] libmachine: (ha-683878) DBG | found existing default KVM network
	I0815 17:28:34.889478   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:34.889336   32422 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0815 17:28:34.889535   32399 main.go:141] libmachine: (ha-683878) DBG | created network xml: 
	I0815 17:28:34.889551   32399 main.go:141] libmachine: (ha-683878) DBG | <network>
	I0815 17:28:34.889561   32399 main.go:141] libmachine: (ha-683878) DBG |   <name>mk-ha-683878</name>
	I0815 17:28:34.889575   32399 main.go:141] libmachine: (ha-683878) DBG |   <dns enable='no'/>
	I0815 17:28:34.889587   32399 main.go:141] libmachine: (ha-683878) DBG |   
	I0815 17:28:34.889603   32399 main.go:141] libmachine: (ha-683878) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 17:28:34.889615   32399 main.go:141] libmachine: (ha-683878) DBG |     <dhcp>
	I0815 17:28:34.889623   32399 main.go:141] libmachine: (ha-683878) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 17:28:34.889633   32399 main.go:141] libmachine: (ha-683878) DBG |     </dhcp>
	I0815 17:28:34.889648   32399 main.go:141] libmachine: (ha-683878) DBG |   </ip>
	I0815 17:28:34.889660   32399 main.go:141] libmachine: (ha-683878) DBG |   
	I0815 17:28:34.889674   32399 main.go:141] libmachine: (ha-683878) DBG | </network>
	I0815 17:28:34.889687   32399 main.go:141] libmachine: (ha-683878) DBG | 
	I0815 17:28:34.894456   32399 main.go:141] libmachine: (ha-683878) DBG | trying to create private KVM network mk-ha-683878 192.168.39.0/24...
	I0815 17:28:34.954565   32399 main.go:141] libmachine: (ha-683878) DBG | private KVM network mk-ha-683878 192.168.39.0/24 created
	I0815 17:28:34.954594   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:34.954547   32422 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:28:34.954606   32399 main.go:141] libmachine: (ha-683878) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878 ...
	I0815 17:28:34.954623   32399 main.go:141] libmachine: (ha-683878) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:28:34.954688   32399 main.go:141] libmachine: (ha-683878) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 17:28:35.191456   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:35.191322   32422 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa...
	I0815 17:28:35.362236   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:35.362134   32422 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/ha-683878.rawdisk...
	I0815 17:28:35.362262   32399 main.go:141] libmachine: (ha-683878) DBG | Writing magic tar header
	I0815 17:28:35.362271   32399 main.go:141] libmachine: (ha-683878) DBG | Writing SSH key tar header
	I0815 17:28:35.362281   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:35.362253   32422 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878 ...
	I0815 17:28:35.362386   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878
	I0815 17:28:35.362412   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 17:28:35.362419   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:28:35.362431   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 17:28:35.362442   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878 (perms=drwx------)
	I0815 17:28:35.362475   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 17:28:35.362486   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 17:28:35.362494   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 17:28:35.362503   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 17:28:35.362513   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 17:28:35.362525   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 17:28:35.362537   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins
	I0815 17:28:35.362547   32399 main.go:141] libmachine: (ha-683878) Creating domain...
	I0815 17:28:35.362555   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home
	I0815 17:28:35.362569   32399 main.go:141] libmachine: (ha-683878) DBG | Skipping /home - not owner
	I0815 17:28:35.363677   32399 main.go:141] libmachine: (ha-683878) define libvirt domain using xml: 
	I0815 17:28:35.363695   32399 main.go:141] libmachine: (ha-683878) <domain type='kvm'>
	I0815 17:28:35.363702   32399 main.go:141] libmachine: (ha-683878)   <name>ha-683878</name>
	I0815 17:28:35.363709   32399 main.go:141] libmachine: (ha-683878)   <memory unit='MiB'>2200</memory>
	I0815 17:28:35.363715   32399 main.go:141] libmachine: (ha-683878)   <vcpu>2</vcpu>
	I0815 17:28:35.363722   32399 main.go:141] libmachine: (ha-683878)   <features>
	I0815 17:28:35.363727   32399 main.go:141] libmachine: (ha-683878)     <acpi/>
	I0815 17:28:35.363732   32399 main.go:141] libmachine: (ha-683878)     <apic/>
	I0815 17:28:35.363739   32399 main.go:141] libmachine: (ha-683878)     <pae/>
	I0815 17:28:35.363750   32399 main.go:141] libmachine: (ha-683878)     
	I0815 17:28:35.363759   32399 main.go:141] libmachine: (ha-683878)   </features>
	I0815 17:28:35.363769   32399 main.go:141] libmachine: (ha-683878)   <cpu mode='host-passthrough'>
	I0815 17:28:35.363775   32399 main.go:141] libmachine: (ha-683878)   
	I0815 17:28:35.363778   32399 main.go:141] libmachine: (ha-683878)   </cpu>
	I0815 17:28:35.363785   32399 main.go:141] libmachine: (ha-683878)   <os>
	I0815 17:28:35.363795   32399 main.go:141] libmachine: (ha-683878)     <type>hvm</type>
	I0815 17:28:35.363800   32399 main.go:141] libmachine: (ha-683878)     <boot dev='cdrom'/>
	I0815 17:28:35.363807   32399 main.go:141] libmachine: (ha-683878)     <boot dev='hd'/>
	I0815 17:28:35.363812   32399 main.go:141] libmachine: (ha-683878)     <bootmenu enable='no'/>
	I0815 17:28:35.363816   32399 main.go:141] libmachine: (ha-683878)   </os>
	I0815 17:28:35.363823   32399 main.go:141] libmachine: (ha-683878)   <devices>
	I0815 17:28:35.363834   32399 main.go:141] libmachine: (ha-683878)     <disk type='file' device='cdrom'>
	I0815 17:28:35.363854   32399 main.go:141] libmachine: (ha-683878)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/boot2docker.iso'/>
	I0815 17:28:35.363867   32399 main.go:141] libmachine: (ha-683878)       <target dev='hdc' bus='scsi'/>
	I0815 17:28:35.363874   32399 main.go:141] libmachine: (ha-683878)       <readonly/>
	I0815 17:28:35.363878   32399 main.go:141] libmachine: (ha-683878)     </disk>
	I0815 17:28:35.363886   32399 main.go:141] libmachine: (ha-683878)     <disk type='file' device='disk'>
	I0815 17:28:35.363892   32399 main.go:141] libmachine: (ha-683878)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 17:28:35.363902   32399 main.go:141] libmachine: (ha-683878)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/ha-683878.rawdisk'/>
	I0815 17:28:35.363909   32399 main.go:141] libmachine: (ha-683878)       <target dev='hda' bus='virtio'/>
	I0815 17:28:35.363920   32399 main.go:141] libmachine: (ha-683878)     </disk>
	I0815 17:28:35.363929   32399 main.go:141] libmachine: (ha-683878)     <interface type='network'>
	I0815 17:28:35.363940   32399 main.go:141] libmachine: (ha-683878)       <source network='mk-ha-683878'/>
	I0815 17:28:35.363957   32399 main.go:141] libmachine: (ha-683878)       <model type='virtio'/>
	I0815 17:28:35.363970   32399 main.go:141] libmachine: (ha-683878)     </interface>
	I0815 17:28:35.363979   32399 main.go:141] libmachine: (ha-683878)     <interface type='network'>
	I0815 17:28:35.363991   32399 main.go:141] libmachine: (ha-683878)       <source network='default'/>
	I0815 17:28:35.364003   32399 main.go:141] libmachine: (ha-683878)       <model type='virtio'/>
	I0815 17:28:35.364011   32399 main.go:141] libmachine: (ha-683878)     </interface>
	I0815 17:28:35.364023   32399 main.go:141] libmachine: (ha-683878)     <serial type='pty'>
	I0815 17:28:35.364033   32399 main.go:141] libmachine: (ha-683878)       <target port='0'/>
	I0815 17:28:35.364041   32399 main.go:141] libmachine: (ha-683878)     </serial>
	I0815 17:28:35.364050   32399 main.go:141] libmachine: (ha-683878)     <console type='pty'>
	I0815 17:28:35.364060   32399 main.go:141] libmachine: (ha-683878)       <target type='serial' port='0'/>
	I0815 17:28:35.364076   32399 main.go:141] libmachine: (ha-683878)     </console>
	I0815 17:28:35.364106   32399 main.go:141] libmachine: (ha-683878)     <rng model='virtio'>
	I0815 17:28:35.364131   32399 main.go:141] libmachine: (ha-683878)       <backend model='random'>/dev/random</backend>
	I0815 17:28:35.364140   32399 main.go:141] libmachine: (ha-683878)     </rng>
	I0815 17:28:35.364151   32399 main.go:141] libmachine: (ha-683878)     
	I0815 17:28:35.364162   32399 main.go:141] libmachine: (ha-683878)     
	I0815 17:28:35.364173   32399 main.go:141] libmachine: (ha-683878)   </devices>
	I0815 17:28:35.364182   32399 main.go:141] libmachine: (ha-683878) </domain>
	I0815 17:28:35.364197   32399 main.go:141] libmachine: (ha-683878) 
	I0815 17:28:35.368264   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:41:65:82 in network default
	I0815 17:28:35.368736   32399 main.go:141] libmachine: (ha-683878) Ensuring networks are active...
	I0815 17:28:35.368759   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:35.369370   32399 main.go:141] libmachine: (ha-683878) Ensuring network default is active
	I0815 17:28:35.369656   32399 main.go:141] libmachine: (ha-683878) Ensuring network mk-ha-683878 is active
	I0815 17:28:35.370074   32399 main.go:141] libmachine: (ha-683878) Getting domain xml...
	I0815 17:28:35.370689   32399 main.go:141] libmachine: (ha-683878) Creating domain...
	I0815 17:28:36.535163   32399 main.go:141] libmachine: (ha-683878) Waiting to get IP...
	I0815 17:28:36.535871   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:36.536220   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:36.536261   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:36.536212   32422 retry.go:31] will retry after 215.159557ms: waiting for machine to come up
	I0815 17:28:36.752670   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:36.753187   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:36.753215   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:36.753136   32422 retry.go:31] will retry after 278.070607ms: waiting for machine to come up
	I0815 17:28:37.032729   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:37.033223   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:37.033252   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:37.033186   32422 retry.go:31] will retry after 302.870993ms: waiting for machine to come up
	I0815 17:28:37.337510   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:37.337962   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:37.337990   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:37.337907   32422 retry.go:31] will retry after 475.34796ms: waiting for machine to come up
	I0815 17:28:37.814459   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:37.814892   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:37.814920   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:37.814839   32422 retry.go:31] will retry after 512.676016ms: waiting for machine to come up
	I0815 17:28:38.329532   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:38.329864   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:38.329893   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:38.329818   32422 retry.go:31] will retry after 622.237179ms: waiting for machine to come up
	I0815 17:28:38.953579   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:38.953931   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:38.953971   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:38.953895   32422 retry.go:31] will retry after 794.455757ms: waiting for machine to come up
	I0815 17:28:39.749652   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:39.750014   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:39.750039   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:39.749964   32422 retry.go:31] will retry after 1.306931639s: waiting for machine to come up
	I0815 17:28:41.058790   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:41.059117   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:41.059146   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:41.059062   32422 retry.go:31] will retry after 1.852585502s: waiting for machine to come up
	I0815 17:28:42.913929   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:42.914161   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:42.914188   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:42.914122   32422 retry.go:31] will retry after 2.102645836s: waiting for machine to come up
	I0815 17:28:45.018326   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:45.018830   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:45.018858   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:45.018780   32422 retry.go:31] will retry after 2.568960935s: waiting for machine to come up
	I0815 17:28:47.589452   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:47.589768   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:47.589794   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:47.589735   32422 retry.go:31] will retry after 2.187445497s: waiting for machine to come up
	I0815 17:28:49.778302   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:49.778691   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:49.778720   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:49.778651   32422 retry.go:31] will retry after 2.908424791s: waiting for machine to come up
	I0815 17:28:52.689499   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:52.689792   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:52.689819   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:52.689733   32422 retry.go:31] will retry after 5.582171457s: waiting for machine to come up
	I0815 17:28:58.276256   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.276721   32399 main.go:141] libmachine: (ha-683878) Found IP for machine: 192.168.39.17
	I0815 17:28:58.276749   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has current primary IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.276759   32399 main.go:141] libmachine: (ha-683878) Reserving static IP address...
	I0815 17:28:58.277038   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find host DHCP lease matching {name: "ha-683878", mac: "52:54:00:fe:4b:82", ip: "192.168.39.17"} in network mk-ha-683878
	I0815 17:28:58.346012   32399 main.go:141] libmachine: (ha-683878) Reserved static IP address: 192.168.39.17
	I0815 17:28:58.346045   32399 main.go:141] libmachine: (ha-683878) Waiting for SSH to be available...
	I0815 17:28:58.346053   32399 main.go:141] libmachine: (ha-683878) DBG | Getting to WaitForSSH function...
	I0815 17:28:58.349018   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.349481   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.349504   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.349659   32399 main.go:141] libmachine: (ha-683878) DBG | Using SSH client type: external
	I0815 17:28:58.349693   32399 main.go:141] libmachine: (ha-683878) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa (-rw-------)
	I0815 17:28:58.349761   32399 main.go:141] libmachine: (ha-683878) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:28:58.349791   32399 main.go:141] libmachine: (ha-683878) DBG | About to run SSH command:
	I0815 17:28:58.349808   32399 main.go:141] libmachine: (ha-683878) DBG | exit 0
	I0815 17:28:58.472261   32399 main.go:141] libmachine: (ha-683878) DBG | SSH cmd err, output: <nil>: 
	I0815 17:28:58.472552   32399 main.go:141] libmachine: (ha-683878) KVM machine creation complete!
	I0815 17:28:58.472835   32399 main.go:141] libmachine: (ha-683878) Calling .GetConfigRaw
	I0815 17:28:58.473309   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:58.473477   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:58.473617   32399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 17:28:58.473633   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:28:58.474916   32399 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 17:28:58.474936   32399 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 17:28:58.474944   32399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 17:28:58.474952   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.476942   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.477287   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.477310   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.477437   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:58.477612   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.477724   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.477857   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:58.477988   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:58.478202   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:58.478213   32399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 17:28:58.575551   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:28:58.575575   32399 main.go:141] libmachine: Detecting the provisioner...
	I0815 17:28:58.575583   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.578192   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.578538   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.578565   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.578706   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:58.578890   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.579056   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.579230   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:58.579402   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:58.579606   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:58.579619   32399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 17:28:58.681136   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 17:28:58.681242   32399 main.go:141] libmachine: found compatible host: buildroot
	I0815 17:28:58.681252   32399 main.go:141] libmachine: Provisioning with buildroot...
	I0815 17:28:58.681259   32399 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:28:58.681494   32399 buildroot.go:166] provisioning hostname "ha-683878"
	I0815 17:28:58.681518   32399 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:28:58.681725   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.684126   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.684515   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.684546   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.684628   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:58.684796   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.684942   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.685046   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:58.685310   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:58.685483   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:58.685495   32399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683878 && echo "ha-683878" | sudo tee /etc/hostname
	I0815 17:28:58.804620   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878
	
	I0815 17:28:58.804650   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.807320   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.807700   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.807740   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.807912   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:58.808085   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.808262   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.808388   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:58.808568   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:58.808754   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:58.808779   32399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683878/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:28:58.917934   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:28:58.917967   32399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:28:58.918006   32399 buildroot.go:174] setting up certificates
	I0815 17:28:58.918018   32399 provision.go:84] configureAuth start
	I0815 17:28:58.918030   32399 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:28:58.918284   32399 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:28:58.920820   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.921181   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.921206   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.921272   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.923106   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.923501   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.923522   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.923681   32399 provision.go:143] copyHostCerts
	I0815 17:28:58.923721   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:28:58.923779   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 17:28:58.923794   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:28:58.923861   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:28:58.923944   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:28:58.923961   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 17:28:58.923968   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:28:58.923992   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:28:58.924044   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:28:58.924061   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 17:28:58.924067   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:28:58.924121   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:28:58.924183   32399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.ha-683878 san=[127.0.0.1 192.168.39.17 ha-683878 localhost minikube]
	I0815 17:28:59.216173   32399 provision.go:177] copyRemoteCerts
	I0815 17:28:59.216225   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:28:59.216247   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.218649   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.218925   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.218952   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.219116   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.219296   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.219540   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.219697   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:28:59.303096   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:28:59.303174   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:28:59.329729   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:28:59.329803   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 17:28:59.352653   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:28:59.352731   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:28:59.383980   32399 provision.go:87] duration metric: took 465.94572ms to configureAuth
	I0815 17:28:59.384005   32399 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:28:59.384227   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:28:59.384320   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.386956   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.387346   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.387380   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.387537   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.387712   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.387845   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.387999   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.388182   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:59.388386   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:59.388406   32399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:28:59.667257   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:28:59.667281   32399 main.go:141] libmachine: Checking connection to Docker...
	I0815 17:28:59.667292   32399 main.go:141] libmachine: (ha-683878) Calling .GetURL
	I0815 17:28:59.668468   32399 main.go:141] libmachine: (ha-683878) DBG | Using libvirt version 6000000
	I0815 17:28:59.670585   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.670944   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.670982   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.671050   32399 main.go:141] libmachine: Docker is up and running!
	I0815 17:28:59.671060   32399 main.go:141] libmachine: Reticulating splines...
	I0815 17:28:59.671066   32399 client.go:171] duration metric: took 24.784574398s to LocalClient.Create
	I0815 17:28:59.671089   32399 start.go:167] duration metric: took 24.784644393s to libmachine.API.Create "ha-683878"
	I0815 17:28:59.671101   32399 start.go:293] postStartSetup for "ha-683878" (driver="kvm2")
	I0815 17:28:59.671120   32399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:28:59.671137   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.671378   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:28:59.671405   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.673342   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.673625   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.673652   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.673778   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.673975   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.674150   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.674440   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:28:59.755301   32399 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:28:59.759393   32399 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:28:59.759426   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:28:59.759487   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:28:59.759563   32399 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 17:28:59.759572   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 17:28:59.759660   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:28:59.768798   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:28:59.791446   32399 start.go:296] duration metric: took 120.325971ms for postStartSetup
	I0815 17:28:59.791485   32399 main.go:141] libmachine: (ha-683878) Calling .GetConfigRaw
	I0815 17:28:59.792035   32399 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:28:59.794600   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.794943   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.794970   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.795198   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:28:59.795393   32399 start.go:128] duration metric: took 24.926390331s to createHost
	I0815 17:28:59.795424   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.797977   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.798326   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.798361   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.798514   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.798686   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.798885   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.799109   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.799301   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:59.799459   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:59.799474   32399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:28:59.901035   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723742939.879404272
	
	I0815 17:28:59.901058   32399 fix.go:216] guest clock: 1723742939.879404272
	I0815 17:28:59.901066   32399 fix.go:229] Guest: 2024-08-15 17:28:59.879404272 +0000 UTC Remote: 2024-08-15 17:28:59.795412333 +0000 UTC m=+25.028306997 (delta=83.991939ms)
	I0815 17:28:59.901120   32399 fix.go:200] guest clock delta is within tolerance: 83.991939ms
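fix.go derives the guest/host clock delta from the `date +%s.%N` output captured above. A minimal host-side sketch of the same check, assuming `minikube ssh` is pointed at the ha-683878 profile (variable names are illustrative):

    # Compare guest and host clocks; the log above reports a delta of ~0.084s, within tolerance.
    host_ts=$(date +%s.%N)
    guest_ts=$(minikube ssh -p ha-683878 -- date +%s.%N)
    echo "delta: $(echo "$guest_ts - $host_ts" | bc) seconds"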
	I0815 17:28:59.901125   32399 start.go:83] releasing machines lock for "ha-683878", held for 25.03223627s
	I0815 17:28:59.901144   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.901396   32399 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:28:59.903603   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.903923   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.903949   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.904114   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.904612   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.904814   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.904900   32399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:28:59.904937   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.905033   32399 ssh_runner.go:195] Run: cat /version.json
	I0815 17:28:59.905058   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.907127   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.907468   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.907505   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.907528   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.907584   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.907785   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.907866   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.907890   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.907955   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.908069   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.908123   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:28:59.908212   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.908352   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.908482   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:28:59.982274   32399 ssh_runner.go:195] Run: systemctl --version
	I0815 17:29:00.010603   32399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:29:00.173386   32399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:29:00.179262   32399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:29:00.179328   32399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:29:00.195996   32399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 17:29:00.196018   32399 start.go:495] detecting cgroup driver to use...
	I0815 17:29:00.196090   32399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:29:00.212762   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:29:00.225540   32399 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:29:00.225588   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:29:00.239169   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:29:00.252624   32399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:29:00.371331   32399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:29:00.532347   32399 docker.go:233] disabling docker service ...
	I0815 17:29:00.532421   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:29:00.547210   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:29:00.559940   32399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:29:00.671778   32399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:29:00.781500   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:29:00.795997   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:29:00.814573   32399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:29:00.814636   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.825112   32399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:29:00.825188   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.835607   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.845889   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.856124   32399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:29:00.866904   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.877044   32399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.893637   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
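After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should carry the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl those commands wrote. A quick grep on the guest, as a sketch (file path from the log, expected values reconstructed from the sed patterns rather than captured output):

    # Inspect the drop-in the sed commands above rewrote.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected (reconstructed from the sed patterns above):
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",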
	I0815 17:29:00.904174   32399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:29:00.913740   32399 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 17:29:00.913787   32399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 17:29:00.927332   32399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:29:00.937108   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:29:01.047868   32399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:29:01.180694   32399 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:29:01.180752   32399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:29:01.185847   32399 start.go:563] Will wait 60s for crictl version
	I0815 17:29:01.185887   32399 ssh_runner.go:195] Run: which crictl
	I0815 17:29:01.189535   32399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:29:01.227446   32399 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:29:01.227527   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:29:01.256693   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:29:01.288058   32399 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:29:01.289397   32399 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:29:01.291758   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:01.292117   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:29:01.292142   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:01.292296   32399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:29:01.296691   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:29:01.309238   32399 kubeadm.go:883] updating cluster {Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:29:01.309336   32399 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:29:01.309380   32399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:29:01.345370   32399 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 17:29:01.345438   32399 ssh_runner.go:195] Run: which lz4
	I0815 17:29:01.349279   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0815 17:29:01.349352   32399 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 17:29:01.353590   32399 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 17:29:01.353620   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 17:29:02.641678   32399 crio.go:462] duration metric: took 1.292340744s to copy over tarball
	I0815 17:29:02.641734   32399 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 17:29:04.650799   32399 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.009042805s)
	I0815 17:29:04.650821   32399 crio.go:469] duration metric: took 2.009122075s to extract the tarball
	I0815 17:29:04.650828   32399 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 17:29:04.687959   32399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:29:04.732018   32399 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:29:04.732040   32399 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:29:04.732049   32399 kubeadm.go:934] updating node { 192.168.39.17 8443 v1.31.0 crio true true} ...
	I0815 17:29:04.732185   32399 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:29:04.732267   32399 ssh_runner.go:195] Run: crio config
	I0815 17:29:04.776215   32399 cni.go:84] Creating CNI manager for ""
	I0815 17:29:04.776232   32399 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 17:29:04.776241   32399 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:29:04.776266   32399 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683878 NodeName:ha-683878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:29:04.776440   32399 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683878"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
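The kubeadm init output later in this log flags this v1beta3 spec as deprecated (see the W0815 lines after init). kubeadm's own suggested migration can be applied to the file minikube writes, as a sketch; the binary and input paths are the ones from the log, the output filename is illustrative:

    # Rewrite the deprecated v1beta3 config with kubeadm's migrator, as the init warnings suggest.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm.v1beta4.yaml
    # Optional structural check before running init with it:
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.v1beta4.yaml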
	
	I0815 17:29:04.776467   32399 kube-vip.go:115] generating kube-vip config ...
	I0815 17:29:04.776535   32399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 17:29:04.794390   32399 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:29:04.794511   32399 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
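This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs it as a static pod and kube-vip advertises 192.168.39.254 on eth0 once it holds the plndr-cp-lock lease. Two quick guest-side checks after the control plane is up, as a sketch:

    # On the ha-683878 guest, after the kubelet has started the static pod:
    sudo crictl ps --name kube-vip                      # container created from the manifest above
    ip -4 addr show dev eth0 | grep 192.168.39.254      # VIP appears once this node wins leader election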
	I0815 17:29:04.794575   32399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:29:04.804647   32399 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:29:04.804712   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 17:29:04.814079   32399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0815 17:29:04.830492   32399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:29:04.846899   32399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0815 17:29:04.863275   32399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0815 17:29:04.879299   32399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:29:04.883154   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:29:04.896153   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:29:05.008398   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
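At this point the kubelet unit, the 10-kubeadm.conf drop-in and kubeadm.yaml.new have all been written and the kubelet started. Whether the ExecStart override from the drop-in took effect can be confirmed with systemd itself, for example:

    # Confirm the drop-in scp'd above is what systemd is using.
    systemctl cat kubelet | grep -- --node-ip    # expect --node-ip=192.168.39.17 from the ExecStart override
    systemctl is-active kubelet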
	I0815 17:29:05.026462   32399 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878 for IP: 192.168.39.17
	I0815 17:29:05.026485   32399 certs.go:194] generating shared ca certs ...
	I0815 17:29:05.026506   32399 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.026673   32399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:29:05.026724   32399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:29:05.026737   32399 certs.go:256] generating profile certs ...
	I0815 17:29:05.026802   32399 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key
	I0815 17:29:05.026838   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt with IP's: []
	I0815 17:29:05.243686   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt ...
	I0815 17:29:05.243713   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt: {Name:mka6b0ae4d3b6108f0dde5d6e013160dcf23c1a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.243889   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key ...
	I0815 17:29:05.243906   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key: {Name:mk884d016cc8b0e5b7de4262c0afd40292798185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.244004   32399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.84f93edb
	I0815 17:29:05.244026   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.84f93edb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17 192.168.39.254]
	I0815 17:29:05.345591   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.84f93edb ...
	I0815 17:29:05.345617   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.84f93edb: {Name:mkec3bc615edae99a0ab078c330d2505b6f94ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.345790   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.84f93edb ...
	I0815 17:29:05.345807   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.84f93edb: {Name:mk289a9480cee4e4b94a92537ac1cfa80a7cf9a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.345899   32399 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.84f93edb -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt
	I0815 17:29:05.346006   32399 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.84f93edb -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key
	I0815 17:29:05.346078   32399 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key
	I0815 17:29:05.346099   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt with IP's: []
	I0815 17:29:05.492320   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt ...
	I0815 17:29:05.492348   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt: {Name:mk01a3faddbf012a325f4a20b2b1715c093a8885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.492526   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key ...
	I0815 17:29:05.492543   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key: {Name:mk0737ef679a14beb8d241632c98c89dd65363db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
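The apiserver profile cert generated above is signed for the service IP, localhost, the node IP and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.17, 192.168.39.254). That SAN list can be read back from the cert on the host, e.g.:

    # Inspect the SANs baked into the freshly generated apiserver cert (host-side path from the log).
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt \
        | grep -A1 'Subject Alternative Name'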
	I0815 17:29:05.492638   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:29:05.492662   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:29:05.492682   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:29:05.492701   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:29:05.492721   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:29:05.492739   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:29:05.492751   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:29:05.492768   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:29:05.492835   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 17:29:05.492880   32399 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 17:29:05.492894   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:29:05.492927   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:29:05.492958   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:29:05.492988   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:29:05.493044   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:29:05.493080   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 17:29:05.493100   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:05.493119   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 17:29:05.493679   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:29:05.520195   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:29:05.544164   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:29:05.568703   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:29:05.593486   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 17:29:05.618046   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 17:29:05.642052   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:29:05.665957   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:29:05.690404   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 17:29:05.715449   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:29:05.738950   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 17:29:05.771497   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:29:05.819020   32399 ssh_runner.go:195] Run: openssl version
	I0815 17:29:05.826728   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 17:29:05.842367   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 17:29:05.847050   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 17:29:05.847138   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 17:29:05.853164   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:29:05.863863   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:29:05.874594   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:05.878999   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:05.879049   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:05.884880   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:29:05.895486   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 17:29:05.906013   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 17:29:05.910976   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 17:29:05.911016   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 17:29:05.916970   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
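The .0 names used for the /etc/ssl/certs symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject hashes of the corresponding PEM files, which is exactly what the `openssl x509 -hash` calls compute before each link is created. For example, mirroring the minikubeCA step:

    # The hash output names the symlink that OpenSSL-based clients will look up.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0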
	I0815 17:29:05.927441   32399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:29:05.931725   32399 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:29:05.931792   32399 kubeadm.go:392] StartCluster: {Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:29:05.931877   32399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:29:05.931914   32399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:29:05.970023   32399 cri.go:89] found id: ""
	I0815 17:29:05.970092   32399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:29:05.980972   32399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:29:05.990882   32399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:29:06.000633   32399 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:29:06.000655   32399 kubeadm.go:157] found existing configuration files:
	
	I0815 17:29:06.000704   32399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 17:29:06.009868   32399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:29:06.009936   32399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:29:06.019505   32399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 17:29:06.028653   32399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:29:06.028769   32399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:29:06.038132   32399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 17:29:06.046992   32399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:29:06.047037   32399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:29:06.055976   32399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 17:29:06.064527   32399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:29:06.064565   32399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 17:29:06.073712   32399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 17:29:06.175782   32399 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 17:29:06.175999   32399 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:29:06.276047   32399 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:29:06.276216   32399 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:29:06.276346   32399 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:29:06.285277   32399 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:29:06.444404   32399 out.go:235]   - Generating certificates and keys ...
	I0815 17:29:06.444552   32399 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:29:06.444645   32399 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:29:06.553231   32399 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 17:29:06.633700   32399 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 17:29:06.800062   32399 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 17:29:07.034589   32399 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 17:29:07.097287   32399 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 17:29:07.097535   32399 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-683878 localhost] and IPs [192.168.39.17 127.0.0.1 ::1]
	I0815 17:29:07.194740   32399 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 17:29:07.194996   32399 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-683878 localhost] and IPs [192.168.39.17 127.0.0.1 ::1]
	I0815 17:29:07.496079   32399 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 17:29:07.810924   32399 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 17:29:08.036559   32399 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 17:29:08.036848   32399 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:29:08.161049   32399 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:29:08.286279   32399 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 17:29:08.342451   32399 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:29:08.771981   32399 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:29:08.982305   32399 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:29:08.982988   32399 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:29:08.986841   32399 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:29:08.988740   32399 out.go:235]   - Booting up control plane ...
	I0815 17:29:08.988838   32399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:29:08.988964   32399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:29:08.989697   32399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:29:09.008408   32399 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:29:09.014240   32399 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:29:09.014299   32399 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:29:09.143041   32399 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 17:29:09.143184   32399 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 17:29:09.644268   32399 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56421ms
	I0815 17:29:09.644370   32399 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 17:29:15.738721   32399 kubeadm.go:310] [api-check] The API server is healthy after 6.097532107s
	I0815 17:29:15.750426   32399 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:29:15.763826   32399 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:29:15.784883   32399 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:29:15.785121   32399 kubeadm.go:310] [mark-control-plane] Marking the node ha-683878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:29:15.799423   32399 kubeadm.go:310] [bootstrap-token] Using token: wla41g.09q7zejczut0pxz8
	I0815 17:29:15.800876   32399 out.go:235]   - Configuring RBAC rules ...
	I0815 17:29:15.800993   32399 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:29:15.806024   32399 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:29:15.812326   32399 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:29:15.815476   32399 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:29:15.823202   32399 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:29:15.826870   32399 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:29:16.145776   32399 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:29:16.580969   32399 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:29:17.145982   32399 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:29:17.146003   32399 kubeadm.go:310] 
	I0815 17:29:17.146068   32399 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:29:17.146075   32399 kubeadm.go:310] 
	I0815 17:29:17.146167   32399 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:29:17.146192   32399 kubeadm.go:310] 
	I0815 17:29:17.146247   32399 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:29:17.146347   32399 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:29:17.146432   32399 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:29:17.146450   32399 kubeadm.go:310] 
	I0815 17:29:17.146525   32399 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:29:17.146538   32399 kubeadm.go:310] 
	I0815 17:29:17.146609   32399 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:29:17.146618   32399 kubeadm.go:310] 
	I0815 17:29:17.146689   32399 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:29:17.146787   32399 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:29:17.146891   32399 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:29:17.146904   32399 kubeadm.go:310] 
	I0815 17:29:17.147017   32399 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:29:17.147124   32399 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:29:17.147132   32399 kubeadm.go:310] 
	I0815 17:29:17.147235   32399 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wla41g.09q7zejczut0pxz8 \
	I0815 17:29:17.147372   32399 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 17:29:17.147403   32399 kubeadm.go:310] 	--control-plane 
	I0815 17:29:17.147409   32399 kubeadm.go:310] 
	I0815 17:29:17.147528   32399 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:29:17.147539   32399 kubeadm.go:310] 
	I0815 17:29:17.147670   32399 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wla41g.09q7zejczut0pxz8 \
	I0815 17:29:17.147847   32399 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 17:29:17.148770   32399 kubeadm.go:310] W0815 17:29:06.157046     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:29:17.149063   32399 kubeadm.go:310] W0815 17:29:06.158172     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:29:17.149241   32399 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 17:29:17.149286   32399 cni.go:84] Creating CNI manager for ""
	I0815 17:29:17.149301   32399 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 17:29:17.151041   32399 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 17:29:17.152275   32399 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 17:29:17.157233   32399 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 17:29:17.157248   32399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 17:29:17.179278   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 17:29:17.521540   32399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:29:17.521631   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:17.521673   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683878 minikube.k8s.io/updated_at=2024_08_15T17_29_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=ha-683878 minikube.k8s.io/primary=true
	I0815 17:29:17.709455   32399 ops.go:34] apiserver oom_adj: -16
	I0815 17:29:17.712122   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:18.213088   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:18.713021   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:19.212622   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:19.712707   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:20.213020   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:20.713162   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:20.848805   32399 kubeadm.go:1113] duration metric: took 3.327234503s to wait for elevateKubeSystemPrivileges
	I0815 17:29:20.848841   32399 kubeadm.go:394] duration metric: took 14.917053977s to StartCluster
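The repeated "kubectl get sa default" runs above are minikube waiting for the default service account to exist before granting kube-system elevated RBAC (the elevateKubeSystemPrivileges step). A minimal sketch of that retry pattern, assuming a simple 500ms poll with a timeout rather than minikube's actual implementation:

// waitsa.go - illustrative sketch: poll "kubectl get sa default" until it
// succeeds or a deadline passes, mirroring the retry loop visible in the log.
// The kubeconfig path, interval, and timeout are assumptions for the example.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is available")
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
			os.Exit(1)
		}
		time.Sleep(500 * time.Millisecond)
	}
}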
	I0815 17:29:20.848878   32399 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:20.848957   32399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:29:20.849640   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:20.849835   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 17:29:20.849849   32399 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:29:20.849870   32399 start.go:241] waiting for startup goroutines ...
	I0815 17:29:20.849884   32399 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 17:29:20.849948   32399 addons.go:69] Setting storage-provisioner=true in profile "ha-683878"
	I0815 17:29:20.849958   32399 addons.go:69] Setting default-storageclass=true in profile "ha-683878"
	I0815 17:29:20.849984   32399 addons.go:234] Setting addon storage-provisioner=true in "ha-683878"
	I0815 17:29:20.850000   32399 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-683878"
	I0815 17:29:20.850014   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:29:20.850346   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.850384   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.850564   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:29:20.850662   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.850706   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.864882   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0815 17:29:20.865006   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0815 17:29:20.865421   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.865458   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.865940   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.865953   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.866103   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.866139   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.866273   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.866438   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.866622   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:29:20.866800   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.866836   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.868650   32399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:29:20.868892   32399 kapi.go:59] client config for ha-683878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 17:29:20.869345   32399 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 17:29:20.869613   32399 addons.go:234] Setting addon default-storageclass=true in "ha-683878"
	I0815 17:29:20.869649   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:29:20.869924   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.869961   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.881677   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I0815 17:29:20.882095   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.882700   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.882726   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.883095   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.883282   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:29:20.883623   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0815 17:29:20.884131   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.884661   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.884680   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.885007   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:29:20.885046   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.885555   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.885617   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.887106   32399 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:29:20.888541   32399 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:29:20.888553   32399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:29:20.888566   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:29:20.891279   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:20.891690   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:29:20.891719   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:20.891801   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:29:20.891957   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:29:20.892101   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:29:20.892191   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:29:20.900393   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0815 17:29:20.900699   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.901148   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.901168   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.901414   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.901598   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:29:20.902863   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:29:20.903044   32399 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:29:20.903064   32399 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:29:20.903080   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:29:20.905254   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:20.905602   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:29:20.905629   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:20.905740   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:29:20.905878   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:29:20.906011   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:29:20.906140   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:29:21.020145   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 17:29:21.024428   32399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:29:21.090739   32399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:29:21.725433   32399 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
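The long bash pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1), by inserting a hosts plugin block ahead of the forward plugin in the Corefile. A minimal sketch of the same edit, assuming plain string manipulation of a Corefile rather than the kubectl/sed pipeline minikube uses:

// corednshosts.go - illustrative sketch: insert a "hosts" block before the
// "forward . /etc/resolv.conf" line of a Corefile, the same change the sed
// pipeline in the log applies to the coredns ConfigMap. The sample Corefile
// below is an assumption for the example.
package main

import (
	"fmt"
	"strings"
)

const hostsBlock = `        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
`

func injectHosts(corefile string) string {
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Place the hosts block immediately before the forward plugin.
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := `.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
}
`
	fmt.Print(injectHosts(corefile))
}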
	I0815 17:29:21.914443   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.914473   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.914486   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.914509   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.914738   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.914751   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.914759   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.914767   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.914858   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.914880   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.914899   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.914911   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.914891   32399 main.go:141] libmachine: (ha-683878) DBG | Closing plugin on server side
	I0815 17:29:21.914963   32399 main.go:141] libmachine: (ha-683878) DBG | Closing plugin on server side
	I0815 17:29:21.914964   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.914994   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.916122   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.916140   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.916153   32399 main.go:141] libmachine: (ha-683878) DBG | Closing plugin on server side
	I0815 17:29:21.916211   32399 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 17:29:21.916228   32399 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 17:29:21.916330   32399 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0815 17:29:21.916346   32399 round_trippers.go:469] Request Headers:
	I0815 17:29:21.916358   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:29:21.916366   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:29:21.929821   32399 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0815 17:29:21.930343   32399 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0815 17:29:21.930357   32399 round_trippers.go:469] Request Headers:
	I0815 17:29:21.930366   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:29:21.930371   32399 round_trippers.go:473]     Content-Type: application/json
	I0815 17:29:21.930376   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:29:21.933391   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:29:21.933526   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.933541   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.933764   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.933778   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.933799   32399 main.go:141] libmachine: (ha-683878) DBG | Closing plugin on server side
	I0815 17:29:21.936353   32399 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0815 17:29:21.937522   32399 addons.go:510] duration metric: took 1.087634995s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0815 17:29:21.937552   32399 start.go:246] waiting for cluster config update ...
	I0815 17:29:21.937562   32399 start.go:255] writing updated cluster config ...
	I0815 17:29:21.939000   32399 out.go:201] 
	I0815 17:29:21.940316   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:29:21.940375   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:29:21.941919   32399 out.go:177] * Starting "ha-683878-m02" control-plane node in "ha-683878" cluster
	I0815 17:29:21.943129   32399 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:29:21.943157   32399 cache.go:56] Caching tarball of preloaded images
	I0815 17:29:21.943264   32399 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:29:21.943282   32399 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:29:21.943366   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:29:21.943571   32399 start.go:360] acquireMachinesLock for ha-683878-m02: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:29:21.943622   32399 start.go:364] duration metric: took 26.945µs to acquireMachinesLock for "ha-683878-m02"
	I0815 17:29:21.943643   32399 start.go:93] Provisioning new machine with config: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:29:21.943778   32399 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0815 17:29:21.945415   32399 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:29:21.945522   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:21.945550   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:21.959676   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0815 17:29:21.960075   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:21.960532   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:21.960554   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:21.960870   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:21.961043   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetMachineName
	I0815 17:29:21.961214   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:21.961389   32399 start.go:159] libmachine.API.Create for "ha-683878" (driver="kvm2")
	I0815 17:29:21.961413   32399 client.go:168] LocalClient.Create starting
	I0815 17:29:21.961439   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 17:29:21.961469   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:29:21.961483   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:29:21.961533   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 17:29:21.961553   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:29:21.961564   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:29:21.961579   32399 main.go:141] libmachine: Running pre-create checks...
	I0815 17:29:21.961587   32399 main.go:141] libmachine: (ha-683878-m02) Calling .PreCreateCheck
	I0815 17:29:21.961769   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetConfigRaw
	I0815 17:29:21.962172   32399 main.go:141] libmachine: Creating machine...
	I0815 17:29:21.962185   32399 main.go:141] libmachine: (ha-683878-m02) Calling .Create
	I0815 17:29:21.962307   32399 main.go:141] libmachine: (ha-683878-m02) Creating KVM machine...
	I0815 17:29:21.963437   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found existing default KVM network
	I0815 17:29:21.963661   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found existing private KVM network mk-ha-683878
	I0815 17:29:21.963750   32399 main.go:141] libmachine: (ha-683878-m02) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02 ...
	I0815 17:29:21.963770   32399 main.go:141] libmachine: (ha-683878-m02) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:29:21.963829   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:21.963728   32793 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:29:21.963917   32399 main.go:141] libmachine: (ha-683878-m02) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 17:29:22.189623   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:22.189489   32793 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa...
	I0815 17:29:22.483552   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:22.483427   32793 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/ha-683878-m02.rawdisk...
	I0815 17:29:22.483580   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Writing magic tar header
	I0815 17:29:22.483590   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Writing SSH key tar header
	I0815 17:29:22.483598   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:22.483552   32793 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02 ...
	I0815 17:29:22.483690   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02
	I0815 17:29:22.483709   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 17:29:22.483718   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02 (perms=drwx------)
	I0815 17:29:22.483743   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:29:22.483768   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 17:29:22.483780   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 17:29:22.483800   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 17:29:22.483812   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 17:29:22.483825   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 17:29:22.483836   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 17:29:22.483861   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 17:29:22.483871   32399 main.go:141] libmachine: (ha-683878-m02) Creating domain...
	I0815 17:29:22.483877   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins
	I0815 17:29:22.483883   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home
	I0815 17:29:22.483888   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Skipping /home - not owner
	I0815 17:29:22.484846   32399 main.go:141] libmachine: (ha-683878-m02) define libvirt domain using xml: 
	I0815 17:29:22.484868   32399 main.go:141] libmachine: (ha-683878-m02) <domain type='kvm'>
	I0815 17:29:22.484879   32399 main.go:141] libmachine: (ha-683878-m02)   <name>ha-683878-m02</name>
	I0815 17:29:22.484896   32399 main.go:141] libmachine: (ha-683878-m02)   <memory unit='MiB'>2200</memory>
	I0815 17:29:22.484906   32399 main.go:141] libmachine: (ha-683878-m02)   <vcpu>2</vcpu>
	I0815 17:29:22.484915   32399 main.go:141] libmachine: (ha-683878-m02)   <features>
	I0815 17:29:22.484925   32399 main.go:141] libmachine: (ha-683878-m02)     <acpi/>
	I0815 17:29:22.484932   32399 main.go:141] libmachine: (ha-683878-m02)     <apic/>
	I0815 17:29:22.484938   32399 main.go:141] libmachine: (ha-683878-m02)     <pae/>
	I0815 17:29:22.484944   32399 main.go:141] libmachine: (ha-683878-m02)     
	I0815 17:29:22.484950   32399 main.go:141] libmachine: (ha-683878-m02)   </features>
	I0815 17:29:22.484957   32399 main.go:141] libmachine: (ha-683878-m02)   <cpu mode='host-passthrough'>
	I0815 17:29:22.484962   32399 main.go:141] libmachine: (ha-683878-m02)   
	I0815 17:29:22.484972   32399 main.go:141] libmachine: (ha-683878-m02)   </cpu>
	I0815 17:29:22.484990   32399 main.go:141] libmachine: (ha-683878-m02)   <os>
	I0815 17:29:22.485007   32399 main.go:141] libmachine: (ha-683878-m02)     <type>hvm</type>
	I0815 17:29:22.485020   32399 main.go:141] libmachine: (ha-683878-m02)     <boot dev='cdrom'/>
	I0815 17:29:22.485030   32399 main.go:141] libmachine: (ha-683878-m02)     <boot dev='hd'/>
	I0815 17:29:22.485042   32399 main.go:141] libmachine: (ha-683878-m02)     <bootmenu enable='no'/>
	I0815 17:29:22.485049   32399 main.go:141] libmachine: (ha-683878-m02)   </os>
	I0815 17:29:22.485055   32399 main.go:141] libmachine: (ha-683878-m02)   <devices>
	I0815 17:29:22.485063   32399 main.go:141] libmachine: (ha-683878-m02)     <disk type='file' device='cdrom'>
	I0815 17:29:22.485072   32399 main.go:141] libmachine: (ha-683878-m02)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/boot2docker.iso'/>
	I0815 17:29:22.485085   32399 main.go:141] libmachine: (ha-683878-m02)       <target dev='hdc' bus='scsi'/>
	I0815 17:29:22.485093   32399 main.go:141] libmachine: (ha-683878-m02)       <readonly/>
	I0815 17:29:22.485104   32399 main.go:141] libmachine: (ha-683878-m02)     </disk>
	I0815 17:29:22.485115   32399 main.go:141] libmachine: (ha-683878-m02)     <disk type='file' device='disk'>
	I0815 17:29:22.485128   32399 main.go:141] libmachine: (ha-683878-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 17:29:22.485141   32399 main.go:141] libmachine: (ha-683878-m02)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/ha-683878-m02.rawdisk'/>
	I0815 17:29:22.485153   32399 main.go:141] libmachine: (ha-683878-m02)       <target dev='hda' bus='virtio'/>
	I0815 17:29:22.485171   32399 main.go:141] libmachine: (ha-683878-m02)     </disk>
	I0815 17:29:22.485190   32399 main.go:141] libmachine: (ha-683878-m02)     <interface type='network'>
	I0815 17:29:22.485201   32399 main.go:141] libmachine: (ha-683878-m02)       <source network='mk-ha-683878'/>
	I0815 17:29:22.485212   32399 main.go:141] libmachine: (ha-683878-m02)       <model type='virtio'/>
	I0815 17:29:22.485218   32399 main.go:141] libmachine: (ha-683878-m02)     </interface>
	I0815 17:29:22.485227   32399 main.go:141] libmachine: (ha-683878-m02)     <interface type='network'>
	I0815 17:29:22.485234   32399 main.go:141] libmachine: (ha-683878-m02)       <source network='default'/>
	I0815 17:29:22.485241   32399 main.go:141] libmachine: (ha-683878-m02)       <model type='virtio'/>
	I0815 17:29:22.485249   32399 main.go:141] libmachine: (ha-683878-m02)     </interface>
	I0815 17:29:22.485260   32399 main.go:141] libmachine: (ha-683878-m02)     <serial type='pty'>
	I0815 17:29:22.485271   32399 main.go:141] libmachine: (ha-683878-m02)       <target port='0'/>
	I0815 17:29:22.485283   32399 main.go:141] libmachine: (ha-683878-m02)     </serial>
	I0815 17:29:22.485299   32399 main.go:141] libmachine: (ha-683878-m02)     <console type='pty'>
	I0815 17:29:22.485308   32399 main.go:141] libmachine: (ha-683878-m02)       <target type='serial' port='0'/>
	I0815 17:29:22.485312   32399 main.go:141] libmachine: (ha-683878-m02)     </console>
	I0815 17:29:22.485317   32399 main.go:141] libmachine: (ha-683878-m02)     <rng model='virtio'>
	I0815 17:29:22.485326   32399 main.go:141] libmachine: (ha-683878-m02)       <backend model='random'>/dev/random</backend>
	I0815 17:29:22.485337   32399 main.go:141] libmachine: (ha-683878-m02)     </rng>
	I0815 17:29:22.485345   32399 main.go:141] libmachine: (ha-683878-m02)     
	I0815 17:29:22.485359   32399 main.go:141] libmachine: (ha-683878-m02)     
	I0815 17:29:22.485372   32399 main.go:141] libmachine: (ha-683878-m02)   </devices>
	I0815 17:29:22.485383   32399 main.go:141] libmachine: (ha-683878-m02) </domain>
	I0815 17:29:22.485399   32399 main.go:141] libmachine: (ha-683878-m02) 
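The XML logged line-by-line above is the libvirt domain definition the kvm2 driver generates for the new node: name, memory, vCPUs, the boot2docker ISO as a CD-ROM, the raw disk, and two virtio NICs (the private mk-ha-683878 network plus the default network). A minimal sketch of rendering such a definition with Go's text/template, with a trimmed XML body and field names that are assumptions for the example rather than the driver's real template:

// domainxml.go - illustrative sketch: render a cut-down libvirt domain XML
// like the one logged above. Not the kvm2 driver's actual template.
package main

import (
	"os"
	"text/template"
)

type Domain struct {
	Name     string
	MemoryMB int
	CPUs     int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	d := Domain{
		Name:     "ha-683878-m02",
		MemoryMB: 2200,
		CPUs:     2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/ha-683878-m02.rawdisk",
		Network:  "mk-ha-683878",
	}
	if err := t.Execute(os.Stdout, d); err != nil {
		os.Exit(1)
	}
}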
	I0815 17:29:22.491722   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:ba:76:17 in network default
	I0815 17:29:22.492242   32399 main.go:141] libmachine: (ha-683878-m02) Ensuring networks are active...
	I0815 17:29:22.492263   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:22.492926   32399 main.go:141] libmachine: (ha-683878-m02) Ensuring network default is active
	I0815 17:29:22.493249   32399 main.go:141] libmachine: (ha-683878-m02) Ensuring network mk-ha-683878 is active
	I0815 17:29:22.493559   32399 main.go:141] libmachine: (ha-683878-m02) Getting domain xml...
	I0815 17:29:22.494271   32399 main.go:141] libmachine: (ha-683878-m02) Creating domain...
	I0815 17:29:23.710119   32399 main.go:141] libmachine: (ha-683878-m02) Waiting to get IP...
	I0815 17:29:23.710759   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:23.711081   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:23.711101   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:23.711072   32793 retry.go:31] will retry after 262.72363ms: waiting for machine to come up
	I0815 17:29:23.975486   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:23.975928   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:23.975955   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:23.975897   32793 retry.go:31] will retry after 247.473384ms: waiting for machine to come up
	I0815 17:29:24.225431   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:24.225806   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:24.225831   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:24.225773   32793 retry.go:31] will retry after 384.972078ms: waiting for machine to come up
	I0815 17:29:24.612321   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:24.612824   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:24.612840   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:24.612795   32793 retry.go:31] will retry after 518.994074ms: waiting for machine to come up
	I0815 17:29:25.133498   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:25.133957   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:25.133975   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:25.133932   32793 retry.go:31] will retry after 584.32884ms: waiting for machine to come up
	I0815 17:29:25.719541   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:25.719896   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:25.719923   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:25.719849   32793 retry.go:31] will retry after 842.277729ms: waiting for machine to come up
	I0815 17:29:26.563298   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:26.563685   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:26.563716   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:26.563637   32793 retry.go:31] will retry after 746.421072ms: waiting for machine to come up
	I0815 17:29:27.311847   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:27.312238   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:27.312271   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:27.312216   32793 retry.go:31] will retry after 1.160084319s: waiting for machine to come up
	I0815 17:29:28.473590   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:28.474008   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:28.474037   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:28.473971   32793 retry.go:31] will retry after 1.680079708s: waiting for machine to come up
	I0815 17:29:30.156202   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:30.156758   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:30.156790   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:30.156689   32793 retry.go:31] will retry after 1.986616449s: waiting for machine to come up
	I0815 17:29:32.145220   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:32.145625   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:32.145653   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:32.145582   32793 retry.go:31] will retry after 1.99509911s: waiting for machine to come up
	I0815 17:29:34.143673   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:34.144070   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:34.144092   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:34.144021   32793 retry.go:31] will retry after 3.609024527s: waiting for machine to come up
	I0815 17:29:37.754686   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:37.755077   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:37.755135   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:37.755055   32793 retry.go:31] will retry after 3.656239832s: waiting for machine to come up
	I0815 17:29:41.413427   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:41.413718   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:41.413737   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:41.413694   32793 retry.go:31] will retry after 4.461974251s: waiting for machine to come up
	I0815 17:29:45.878653   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:45.879085   32399 main.go:141] libmachine: (ha-683878-m02) Found IP for machine: 192.168.39.232
	I0815 17:29:45.879110   32399 main.go:141] libmachine: (ha-683878-m02) Reserving static IP address...
	I0815 17:29:45.879124   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has current primary IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:45.879624   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find host DHCP lease matching {name: "ha-683878-m02", mac: "52:54:00:85:ab:06", ip: "192.168.39.232"} in network mk-ha-683878
	I0815 17:29:45.948788   32399 main.go:141] libmachine: (ha-683878-m02) Reserved static IP address: 192.168.39.232
	I0815 17:29:45.948813   32399 main.go:141] libmachine: (ha-683878-m02) Waiting for SSH to be available...
	I0815 17:29:45.948822   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Getting to WaitForSSH function...
	I0815 17:29:45.951204   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:45.951628   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:minikube Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:45.951660   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:45.951813   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Using SSH client type: external
	I0815 17:29:45.951831   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa (-rw-------)
	I0815 17:29:45.951863   32399 main.go:141] libmachine: (ha-683878-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:29:45.951882   32399 main.go:141] libmachine: (ha-683878-m02) DBG | About to run SSH command:
	I0815 17:29:45.951896   32399 main.go:141] libmachine: (ha-683878-m02) DBG | exit 0
	I0815 17:29:46.072523   32399 main.go:141] libmachine: (ha-683878-m02) DBG | SSH cmd err, output: <nil>: 
	I0815 17:29:46.072743   32399 main.go:141] libmachine: (ha-683878-m02) KVM machine creation complete!
	I0815 17:29:46.073108   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetConfigRaw
	I0815 17:29:46.073642   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:46.073868   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:46.074054   32399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 17:29:46.074070   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:29:46.075264   32399 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 17:29:46.075280   32399 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 17:29:46.075288   32399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 17:29:46.075296   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.077684   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.078048   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.078089   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.078236   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.078425   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.078601   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.078762   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.078922   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.079097   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.079107   32399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 17:29:46.183648   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:29:46.183666   32399 main.go:141] libmachine: Detecting the provisioner...
	I0815 17:29:46.183673   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.186236   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.186540   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.186564   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.186696   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.186864   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.187033   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.187182   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.187309   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.187511   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.187522   32399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 17:29:46.289046   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 17:29:46.289126   32399 main.go:141] libmachine: found compatible host: buildroot
	I0815 17:29:46.289140   32399 main.go:141] libmachine: Provisioning with buildroot...
	I0815 17:29:46.289150   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetMachineName
	I0815 17:29:46.289395   32399 buildroot.go:166] provisioning hostname "ha-683878-m02"
	I0815 17:29:46.289419   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetMachineName
	I0815 17:29:46.289625   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.292225   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.292594   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.292619   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.292796   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.292966   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.293120   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.293247   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.293418   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.293595   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.293611   32399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683878-m02 && echo "ha-683878-m02" | sudo tee /etc/hostname
	I0815 17:29:46.410956   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878-m02
	
	I0815 17:29:46.410983   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.413462   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.413775   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.413803   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.413942   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.414120   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.414257   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.414425   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.414558   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.414727   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.414743   32399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683878-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683878-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683878-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:29:46.525032   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:29:46.525061   32399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:29:46.525080   32399 buildroot.go:174] setting up certificates
	I0815 17:29:46.525088   32399 provision.go:84] configureAuth start
	I0815 17:29:46.525097   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetMachineName
	I0815 17:29:46.525380   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:29:46.527520   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.527851   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.527872   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.528001   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.530027   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.530338   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.530362   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.530457   32399 provision.go:143] copyHostCerts
	I0815 17:29:46.530496   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:29:46.530525   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 17:29:46.530533   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:29:46.530595   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:29:46.530665   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:29:46.530682   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 17:29:46.530687   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:29:46.530709   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:29:46.530748   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:29:46.530764   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 17:29:46.530769   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:29:46.530787   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:29:46.530830   32399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.ha-683878-m02 san=[127.0.0.1 192.168.39.232 ha-683878-m02 localhost minikube]
	I0815 17:29:46.603808   32399 provision.go:177] copyRemoteCerts
	I0815 17:29:46.603862   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:29:46.603885   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.606406   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.606664   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.606690   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.606845   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.607007   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.607174   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.607311   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:29:46.686765   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:29:46.686848   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:29:46.714440   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:29:46.714513   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:29:46.740563   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:29:46.740634   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 17:29:46.766101   32399 provision.go:87] duration metric: took 240.999673ms to configureAuth
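
configureAuth above issues a server certificate for the new machine with the SAN set logged at provision.go:117 (127.0.0.1, 192.168.39.232, ha-683878-m02, localhost, minikube) and copies it to /etc/docker on the guest. A rough sketch of producing a certificate with that SAN list using only the Go standard library; it self-signs for brevity, whereas the real flow signs with the CA under .minikube/certs:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key type, validity window, and self-signing are assumptions made for the sketch.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-683878-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as logged above: DNS names plus the node's loopback and LAN IPs.
		DNSNames:    []string{"ha-683878-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.232")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
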
	I0815 17:29:46.766129   32399 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:29:46.766339   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:29:46.766406   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.769092   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.769406   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.769430   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.769535   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.769707   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.769874   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.770015   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.770189   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.770362   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.770377   32399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:29:47.035837   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:29:47.035866   32399 main.go:141] libmachine: Checking connection to Docker...
	I0815 17:29:47.035876   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetURL
	I0815 17:29:47.037224   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Using libvirt version 6000000
	I0815 17:29:47.039511   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.039863   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.039891   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.040079   32399 main.go:141] libmachine: Docker is up and running!
	I0815 17:29:47.040093   32399 main.go:141] libmachine: Reticulating splines...
	I0815 17:29:47.040101   32399 client.go:171] duration metric: took 25.078679128s to LocalClient.Create
	I0815 17:29:47.040127   32399 start.go:167] duration metric: took 25.078737115s to libmachine.API.Create "ha-683878"
	I0815 17:29:47.040146   32399 start.go:293] postStartSetup for "ha-683878-m02" (driver="kvm2")
	I0815 17:29:47.040160   32399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:29:47.040181   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.040402   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:29:47.040422   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:47.042232   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.042511   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.042539   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.042651   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:47.042803   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.042933   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:47.043069   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:29:47.122740   32399 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:29:47.127067   32399 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:29:47.127097   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:29:47.127175   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:29:47.127259   32399 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 17:29:47.127270   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 17:29:47.127349   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:29:47.136399   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:29:47.161183   32399 start.go:296] duration metric: took 121.024015ms for postStartSetup
	I0815 17:29:47.161234   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetConfigRaw
	I0815 17:29:47.161791   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:29:47.164161   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.164539   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.164562   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.164857   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:29:47.165036   32399 start.go:128] duration metric: took 25.221244837s to createHost
	I0815 17:29:47.165059   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:47.167218   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.167508   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.167534   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.167630   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:47.167829   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.167986   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.168206   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:47.168380   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:47.168594   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:47.168608   32399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:29:47.269418   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723742987.248295185
	
	I0815 17:29:47.269438   32399 fix.go:216] guest clock: 1723742987.248295185
	I0815 17:29:47.269448   32399 fix.go:229] Guest: 2024-08-15 17:29:47.248295185 +0000 UTC Remote: 2024-08-15 17:29:47.165046704 +0000 UTC m=+72.397941365 (delta=83.248481ms)
	I0815 17:29:47.269475   32399 fix.go:200] guest clock delta is within tolerance: 83.248481ms
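
The clock check above runs `date +%s.%N` on the guest and compares the result to the host clock; here the measured delta was about 83ms, well inside tolerance. A small sketch of that comparison (the 2s tolerance is an assumed value for illustration, not minikube's exact setting):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockWithinTolerance parses the float-seconds output of `date +%s.%N` and
// compares it against the local clock. The run above produced the guest output
// "1723742987.248295185" and an ~83ms delta.
func clockWithinTolerance(guestOutput string, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	return delta, math.Abs(float64(delta)) <= float64(tolerance)
}

func main() {
	// Simulate a well-synchronized guest by formatting the current time the way
	// `date +%s.%N` would print it.
	now := fmt.Sprintf("%.9f", float64(time.Now().UnixNano())/1e9)
	delta, ok := clockWithinTolerance(now, 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
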
	I0815 17:29:47.269482   32399 start.go:83] releasing machines lock for "ha-683878-m02", held for 25.325849025s
	I0815 17:29:47.269503   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.269773   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:29:47.272069   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.272473   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.272513   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.274690   32399 out.go:177] * Found network options:
	I0815 17:29:47.275926   32399 out.go:177]   - NO_PROXY=192.168.39.17
	W0815 17:29:47.277082   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:29:47.277107   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.277550   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.277746   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.277960   32399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0815 17:29:47.277974   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:29:47.278006   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:47.278044   32399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:29:47.278062   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:47.280307   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.280618   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.280646   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.280744   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:47.280853   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.280927   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.281109   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:47.281289   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.281310   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.281310   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:29:47.281492   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:47.281635   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.281781   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:47.281957   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:29:47.515530   32399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:29:47.522744   32399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:29:47.522812   32399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:29:47.539055   32399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 17:29:47.539076   32399 start.go:495] detecting cgroup driver to use...
	I0815 17:29:47.539150   32399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:29:47.554077   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:29:47.568541   32399 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:29:47.568586   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:29:47.582023   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:29:47.596357   32399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:29:47.712007   32399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:29:47.860743   32399 docker.go:233] disabling docker service ...
	I0815 17:29:47.860809   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:29:47.875352   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:29:47.888137   32399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:29:48.018622   32399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:29:48.148043   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:29:48.161831   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:29:48.179989   32399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:29:48.180042   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.190999   32399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:29:48.191066   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.201369   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.211934   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.222160   32399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:29:48.232612   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.243359   32399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.260510   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
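
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10, the cgroup manager is forced to cgroupfs with conmon in the "pod" cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. A sketch of the first two rewrites applied to an in-memory copy of the file (the sample contents are assumed, not read from the VM):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting point for 02-crio.conf; the real file lives on the guest.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Same effect as the two `sed -i 's|^.*pause_image = ...|...|'` commands above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
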
	I0815 17:29:48.270772   32399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:29:48.280123   32399 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 17:29:48.280168   32399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 17:29:48.293848   32399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:29:48.302741   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:29:48.435828   32399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:29:48.589360   32399 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:29:48.589426   32399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
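
After restarting CRI-O, the tooling waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. A standalone sketch of that wait loop (the poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists and is a unix socket, or the timeout passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
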
	I0815 17:29:48.594369   32399 start.go:563] Will wait 60s for crictl version
	I0815 17:29:48.594428   32399 ssh_runner.go:195] Run: which crictl
	I0815 17:29:48.598223   32399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:29:48.646876   32399 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:29:48.646947   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:29:48.681369   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:29:48.714467   32399 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:29:48.715567   32399 out.go:177]   - env NO_PROXY=192.168.39.17
	I0815 17:29:48.716731   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:29:48.719344   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:48.719801   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:48.719829   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:48.720036   32399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:29:48.724723   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:29:48.737127   32399 mustload.go:65] Loading cluster: ha-683878
	I0815 17:29:48.737342   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:29:48.737704   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:48.737734   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:48.751772   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
	I0815 17:29:48.752196   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:48.752663   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:48.752686   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:48.752989   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:48.753182   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:29:48.754599   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:29:48.754985   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:48.755028   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:48.768922   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0815 17:29:48.769249   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:48.769642   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:48.769661   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:48.769914   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:48.770078   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:29:48.770229   32399 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878 for IP: 192.168.39.232
	I0815 17:29:48.770244   32399 certs.go:194] generating shared ca certs ...
	I0815 17:29:48.770260   32399 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:48.770399   32399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:29:48.770448   32399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:29:48.770464   32399 certs.go:256] generating profile certs ...
	I0815 17:29:48.770559   32399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key
	I0815 17:29:48.770590   32399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.faf4606f
	I0815 17:29:48.770608   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.faf4606f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17 192.168.39.232 192.168.39.254]
	I0815 17:29:49.003509   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.faf4606f ...
	I0815 17:29:49.003550   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.faf4606f: {Name:mk9b4d24b176a74aaa3c6d56b9fc54abe622fa6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:49.003731   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.faf4606f ...
	I0815 17:29:49.003746   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.faf4606f: {Name:mk72d614c186e223591fe67bed0c6e945b20bee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:49.003821   32399 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.faf4606f -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt
	I0815 17:29:49.003952   32399 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.faf4606f -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key
	I0815 17:29:49.004079   32399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key
	I0815 17:29:49.004094   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:29:49.004107   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:29:49.004119   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:29:49.004132   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:29:49.004145   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:29:49.004157   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:29:49.004167   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:29:49.004179   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:29:49.004225   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 17:29:49.004254   32399 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 17:29:49.004263   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:29:49.004285   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:29:49.004308   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:29:49.004330   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:29:49.004366   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:29:49.004394   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:49.004408   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 17:29:49.004422   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 17:29:49.004452   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:29:49.007270   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:49.007676   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:29:49.007704   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:49.007892   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:29:49.008045   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:29:49.008177   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:29:49.008302   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:29:49.076853   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 17:29:49.081397   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 17:29:49.092530   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 17:29:49.096710   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 17:29:49.111800   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 17:29:49.121752   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 17:29:49.134310   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 17:29:49.138987   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 17:29:49.151077   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 17:29:49.155430   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 17:29:49.166575   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 17:29:49.171681   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 17:29:49.189127   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:29:49.217832   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:29:49.243283   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:29:49.268540   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:29:49.291304   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 17:29:49.315317   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 17:29:49.339192   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:29:49.363621   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:29:49.387021   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:29:49.413451   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 17:29:49.436995   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 17:29:49.464385   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 17:29:49.482364   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 17:29:49.499948   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 17:29:49.517811   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 17:29:49.535604   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 17:29:49.553537   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 17:29:49.572141   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 17:29:49.590686   32399 ssh_runner.go:195] Run: openssl version
	I0815 17:29:49.596675   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 17:29:49.607790   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 17:29:49.612457   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 17:29:49.612512   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 17:29:49.618479   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 17:29:49.629534   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 17:29:49.640409   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 17:29:49.644843   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 17:29:49.644886   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 17:29:49.650947   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:29:49.661767   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:29:49.672322   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:49.677324   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:49.677425   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:49.683052   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
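
The openssl/ln steps above install each CA bundle under /usr/share/ca-certificates and symlink it into /etc/ssl/certs under its subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0) so OpenSSL-based clients trust it. A sketch of wiring up one such link, shelling out to openssl for the hash (paths taken from the log; writing to /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM certificate and creates the
// <hash>.0 symlink in certDir if it does not already exist.
func linkCert(pemPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}
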
	I0815 17:29:49.693489   32399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:29:49.697544   32399 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:29:49.697590   32399 kubeadm.go:934] updating node {m02 192.168.39.232 8443 v1.31.0 crio true true} ...
	I0815 17:29:49.697676   32399 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683878-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:29:49.697706   32399 kube-vip.go:115] generating kube-vip config ...
	I0815 17:29:49.697739   32399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 17:29:49.713566   32399 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:29:49.713656   32399 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 17:29:49.713717   32399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:29:49.724044   32399 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 17:29:49.724103   32399 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 17:29:49.735786   32399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 17:29:49.735817   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 17:29:49.735818   32399 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0815 17:29:49.735828   32399 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0815 17:29:49.735893   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 17:29:49.740251   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 17:29:49.740277   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 17:30:32.983649   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 17:30:32.983736   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 17:30:32.991064   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 17:30:32.991097   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 17:30:44.468061   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:30:44.483663   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 17:30:44.483769   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 17:30:44.488170   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 17:30:44.488205   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
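
Because /var/lib/minikube/binaries/v1.31.0 did not exist on the new node, kubectl, kubeadm and kubelet are fetched from dl.k8s.io, checked against the published .sha256 files, and copied over SSH. A sketch of the download-and-verify step for one binary (the destination path here is an assumption):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// downloadVerified streams a release binary to dst while hashing it, then compares
// the digest against the sibling .sha256 file published alongside it.
func downloadVerified(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, strings.TrimSpace(string(want)))
	}
	return nil
}

func main() {
	fmt.Println(downloadVerified("https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl", "/tmp/kubectl"))
}
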
	I0815 17:30:44.807916   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 17:30:44.818008   32399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 17:30:44.834894   32399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:30:44.852162   32399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 17:30:44.868384   32399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:30:44.872949   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:30:44.885070   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:30:45.018161   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:30:45.035336   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:30:45.035674   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:30:45.035708   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:30:45.050682   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0815 17:30:45.051061   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:30:45.051458   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:30:45.051477   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:30:45.051763   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:30:45.051952   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:30:45.052130   32399 start.go:317] joinCluster: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0815 17:30:45.052260   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 17:30:45.052282   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:30:45.055414   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:30:45.055809   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:30:45.055841   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:30:45.056090   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:30:45.056283   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:30:45.056449   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:30:45.056605   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:30:45.218795   32399 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:30:45.218836   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv6pe0.d3ubsmvhon2dbywh --discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683878-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443"
	I0815 17:31:04.724973   32399 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv6pe0.d3ubsmvhon2dbywh --discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683878-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443": (19.506108229s)
	I0815 17:31:04.725004   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 17:31:05.278404   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683878-m02 minikube.k8s.io/updated_at=2024_08_15T17_31_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=ha-683878 minikube.k8s.io/primary=false
	I0815 17:31:05.420275   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-683878-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 17:31:05.549212   32399 start.go:319] duration metric: took 20.497080312s to joinCluster
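
The join just logged follows the usual control-plane expansion sequence: mint a join command on the existing control plane with `kubeadm token create --print-join-command`, run it on the new node with --control-plane and the node's advertise address, then label the node and drop the control-plane NoSchedule taint. A compressed sketch of that ordering (shown on a single machine for brevity; in the log the first two commands run on different hosts over SSH, and only one of the applied labels is reproduced here):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: ask the existing control plane for a join command.
	join, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Println("token create failed:", err)
		return
	}
	// Step 2: the new node runs that command with control-plane flags appended.
	args := strings.Fields(strings.TrimSpace(string(join)))
	args = append(args, "--control-plane", "--apiserver-advertise-address=192.168.39.232")
	fmt.Println("would run on m02:", strings.Join(args, " "))

	// Step 3: mark the joined node and allow workloads to schedule on it.
	_ = exec.Command("kubectl", "label", "--overwrite", "nodes", "ha-683878-m02",
		"minikube.k8s.io/primary=false").Run()
	_ = exec.Command("kubectl", "taint", "nodes", "ha-683878-m02",
		"node-role.kubernetes.io/control-plane:NoSchedule-").Run()
}
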
	I0815 17:31:05.549300   32399 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:31:05.549584   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:31:05.550772   32399 out.go:177] * Verifying Kubernetes components...
	I0815 17:31:05.551934   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:31:05.807088   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:31:05.877001   32399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:31:05.877276   32399 kapi.go:59] client config for ha-683878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 17:31:05.877345   32399 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.17:8443
	I0815 17:31:05.877595   32399 node_ready.go:35] waiting up to 6m0s for node "ha-683878-m02" to be "Ready" ...
	I0815 17:31:05.877697   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:05.877708   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:05.877716   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:05.877721   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:05.909815   32399 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0815 17:31:06.377785   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:06.377807   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:06.377819   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:06.377824   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:06.384312   32399 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 17:31:06.878784   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:06.878808   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:06.878816   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:06.878822   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:06.884587   32399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 17:31:07.378460   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:07.378483   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:07.378491   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:07.378496   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:07.382403   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:07.878654   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:07.878676   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:07.878685   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:07.878693   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:07.941504   32399 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0815 17:31:07.943250   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:08.378618   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:08.378638   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:08.378647   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:08.378650   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:08.382939   32399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:31:08.877735   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:08.877756   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:08.877764   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:08.877769   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:08.881592   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:09.378262   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:09.378282   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:09.378293   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:09.378298   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:09.381556   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:09.877768   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:09.877791   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:09.877799   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:09.877802   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:09.881146   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:10.378773   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:10.378795   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:10.378806   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:10.378810   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:10.381868   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:10.382620   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:10.877936   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:10.877965   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:10.877978   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:10.877986   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:10.881408   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:11.377977   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:11.378001   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:11.378013   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:11.378017   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:11.381359   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:11.878825   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:11.878852   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:11.878864   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:11.878874   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:11.882329   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:12.377799   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:12.377892   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:12.377913   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:12.377925   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:12.392435   32399 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0815 17:31:12.393503   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:12.877747   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:12.877766   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:12.877774   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:12.877778   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:12.880867   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:13.377922   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:13.377946   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:13.377955   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:13.377959   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:13.381623   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:13.878175   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:13.878197   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:13.878205   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:13.878209   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:13.881562   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:14.378606   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:14.378632   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:14.378644   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:14.378652   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:14.382072   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:14.878502   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:14.878527   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:14.878534   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:14.878539   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:14.881964   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:14.882629   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:15.377784   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:15.377805   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:15.377814   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:15.377818   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:15.381157   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:15.878238   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:15.878262   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:15.878270   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:15.878273   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:15.882003   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:16.377958   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:16.377986   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:16.377998   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:16.378003   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:16.381608   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:16.878275   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:16.878301   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:16.878312   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:16.878318   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:16.881211   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:17.378777   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:17.378800   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:17.378810   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:17.378814   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:17.382275   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:17.382984   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:17.878364   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:17.878385   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:17.878392   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:17.878400   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:17.881699   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:18.378569   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:18.378590   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:18.378597   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:18.378601   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:18.381821   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:18.878793   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:18.878818   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:18.878826   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:18.878831   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:18.882150   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:19.378233   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:19.378257   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:19.378267   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:19.378274   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:19.381782   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:19.877813   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:19.877835   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:19.877845   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:19.877852   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:19.881346   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:19.881959   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:20.378057   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:20.378089   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:20.378097   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:20.378101   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:20.381238   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:20.878689   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:20.878712   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:20.878720   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:20.878725   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:20.882140   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:21.378158   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:21.378186   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:21.378197   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:21.378200   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:21.381672   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:21.878435   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:21.878462   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:21.878473   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:21.878480   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:21.881543   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:21.882310   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:22.378428   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:22.378452   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:22.378463   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:22.378469   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:22.381974   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:22.877776   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:22.877797   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:22.877805   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:22.877810   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:22.881437   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:23.378541   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:23.378563   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:23.378571   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:23.378576   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:23.381756   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:23.878715   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:23.878737   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:23.878744   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:23.878748   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:23.882112   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:23.882629   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:24.377972   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:24.378000   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.378022   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.378031   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.380977   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.381546   32399 node_ready.go:49] node "ha-683878-m02" has status "Ready":"True"
	I0815 17:31:24.381563   32399 node_ready.go:38] duration metric: took 18.503951636s for node "ha-683878-m02" to be "Ready" ...
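The round-trip polling above amounts to repeatedly fetching the Node object and checking its Ready condition until it turns True. A minimal client-go sketch of that wait (kubeconfig path, node name, and the ~500ms poll interval are assumptions drawn from this run, not minikube's actual code) could look like:

    // waitnodeready.go: minimal sketch of the node-Ready wait seen in the log.
    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the Node carries a Ready condition with status True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same overall budget as the log's wait
        for time.Now().Before(deadline) {
            n, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-683878-m02", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // poll cadence, roughly what the log shows
        }
        log.Fatal("timed out waiting for node to become Ready")
    }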
	I0815 17:31:24.381571   32399 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:31:24.381635   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:24.381643   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.381650   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.381655   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.385491   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:24.393320   32399 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.393407   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-c5mlj
	I0815 17:31:24.393419   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.393428   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.393433   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.396623   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:24.397383   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.397396   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.397403   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.397406   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.399814   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.400377   32399 pod_ready.go:93] pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.400401   32399 pod_ready.go:82] duration metric: took 7.055742ms for pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.400413   32399 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.400472   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-kfczp
	I0815 17:31:24.400482   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.400507   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.400519   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.402522   32399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:31:24.403273   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.403288   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.403294   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.403300   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.405426   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.406015   32399 pod_ready.go:93] pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.406034   32399 pod_ready.go:82] duration metric: took 5.613674ms for pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.406047   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.406103   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878
	I0815 17:31:24.406113   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.406123   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.406129   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.408178   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.408621   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.408633   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.408639   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.408645   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.411050   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.411451   32399 pod_ready.go:93] pod "etcd-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.411463   32399 pod_ready.go:82] duration metric: took 5.409665ms for pod "etcd-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.411470   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.411506   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878-m02
	I0815 17:31:24.411513   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.411519   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.411525   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.414256   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.415219   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:24.415231   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.415237   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.415242   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.417101   32399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:31:24.417673   32399 pod_ready.go:93] pod "etcd-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.417691   32399 pod_ready.go:82] duration metric: took 6.215712ms for pod "etcd-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.417703   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.578027   32399 request.go:632] Waited for 160.263351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878
	I0815 17:31:24.578084   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878
	I0815 17:31:24.578090   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.578100   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.578109   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.581871   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:24.778898   32399 request.go:632] Waited for 196.360876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.778945   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.778949   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.778975   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.778981   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.781919   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.782450   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.782468   32399 pod_ready.go:82] duration metric: took 364.758957ms for pod "kube-apiserver-ha-683878" in "kube-system" namespace to be "Ready" ...
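The "Waited ... due to client-side throttling" messages interleaved above come from the client's own rate limiter: with QPS and Burst left at 0 in the rest.Config shown earlier, client-go falls back to its conservative defaults, so bursts of back-to-back GETs get queued. A hedged sketch of raising those limits on the config (values and kubeconfig path are illustrative, not what the test uses):

    // qps.go: sketch of lifting client-go's client-side rate limits.
    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        // With QPS/Burst at zero client-go applies its built-in defaults,
        // which is what produces the throttling waits in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            log.Fatal(err)
        }
    }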
	I0815 17:31:24.782478   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.978201   32399 request.go:632] Waited for 195.643943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m02
	I0815 17:31:24.978257   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m02
	I0815 17:31:24.978262   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.978271   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.978274   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.981594   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.178827   32399 request.go:632] Waited for 196.398405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:25.178907   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:25.178916   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.178924   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.178931   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.181476   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:25.182346   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:25.182365   32399 pod_ready.go:82] duration metric: took 399.878796ms for pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.182375   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.378488   32399 request.go:632] Waited for 196.025457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878
	I0815 17:31:25.378611   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878
	I0815 17:31:25.378624   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.378637   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.378644   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.382024   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.578988   32399 request.go:632] Waited for 196.379866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:25.579052   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:25.579060   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.579071   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.579077   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.582263   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.582801   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:25.582817   32399 pod_ready.go:82] duration metric: took 400.436209ms for pod "kube-controller-manager-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.582826   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.778943   32399 request.go:632] Waited for 196.055441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m02
	I0815 17:31:25.779009   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m02
	I0815 17:31:25.779014   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.779022   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.779028   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.782312   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.978321   32399 request.go:632] Waited for 195.368316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:25.978371   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:25.978376   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.978384   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.978392   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.981546   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.982137   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:25.982154   32399 pod_ready.go:82] duration metric: took 399.321147ms for pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.982168   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-89p4v" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.178409   32399 request.go:632] Waited for 196.141898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89p4v
	I0815 17:31:26.178472   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89p4v
	I0815 17:31:26.178480   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.178491   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.178504   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.181996   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:26.379038   32399 request.go:632] Waited for 196.398272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:26.379118   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:26.379124   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.379134   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.379150   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.382230   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:26.382716   32399 pod_ready.go:93] pod "kube-proxy-89p4v" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:26.382733   32399 pod_ready.go:82] duration metric: took 400.551386ms for pod "kube-proxy-89p4v" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.382743   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s9hw4" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.578961   32399 request.go:632] Waited for 196.131977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9hw4
	I0815 17:31:26.579028   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9hw4
	I0815 17:31:26.579036   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.579046   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.579056   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.581979   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:26.779018   32399 request.go:632] Waited for 196.364938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:26.779076   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:26.779083   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.779092   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.779100   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.782152   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:26.782737   32399 pod_ready.go:93] pod "kube-proxy-s9hw4" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:26.782752   32399 pod_ready.go:82] duration metric: took 400.003294ms for pod "kube-proxy-s9hw4" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.782762   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.978870   32399 request.go:632] Waited for 196.03424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878
	I0815 17:31:26.978922   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878
	I0815 17:31:26.978927   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.978934   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.978938   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.982257   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:27.178070   32399 request.go:632] Waited for 195.308344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:27.178126   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:27.178131   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.178146   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.178165   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.182717   32399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:31:27.183320   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:27.183339   32399 pod_ready.go:82] duration metric: took 400.572354ms for pod "kube-scheduler-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:27.183349   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:27.378379   32399 request.go:632] Waited for 194.971084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m02
	I0815 17:31:27.378465   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m02
	I0815 17:31:27.378474   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.378490   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.378499   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.382012   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:27.579011   32399 request.go:632] Waited for 196.360788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:27.579097   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:27.579103   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.579111   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.579119   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.582296   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:27.583177   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:27.583203   32399 pod_ready.go:82] duration metric: took 399.846324ms for pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:27.583218   32399 pod_ready.go:39] duration metric: took 3.201632019s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:31:27.583247   32399 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:31:27.583302   32399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:31:27.599424   32399 api_server.go:72] duration metric: took 22.050081502s to wait for apiserver process to appear ...
	I0815 17:31:27.599446   32399 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:31:27.599473   32399 api_server.go:253] Checking apiserver healthz at https://192.168.39.17:8443/healthz ...
	I0815 17:31:27.603735   32399 api_server.go:279] https://192.168.39.17:8443/healthz returned 200:
	ok
	I0815 17:31:27.603811   32399 round_trippers.go:463] GET https://192.168.39.17:8443/version
	I0815 17:31:27.603822   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.603832   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.603840   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.604623   32399 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 17:31:27.604741   32399 api_server.go:141] control plane version: v1.31.0
	I0815 17:31:27.604759   32399 api_server.go:131] duration metric: took 5.305274ms to wait for apiserver health ...
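The apiserver health and version probes above (GET /healthz expecting "ok", then GET /version) can be reproduced through client-go's discovery client. A minimal sketch, with the kubeconfig path assumed:

    // healthz.go: sketch of the /healthz and /version probes shown in the log.
    package main

    import (
        "context"
        "fmt"
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // GET /healthz through the discovery REST client; a healthy server answers "ok".
        body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            log.Fatalf("healthz: %v", err)
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version for the control-plane version (v1.31.0 in this run).
        v, err := client.Discovery().ServerVersion()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }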
	I0815 17:31:27.604768   32399 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:31:27.778083   32399 request.go:632] Waited for 173.246664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:27.778137   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:27.778142   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.778150   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.778152   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.782656   32399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:31:27.787187   32399 system_pods.go:59] 17 kube-system pods found
	I0815 17:31:27.787235   32399 system_pods.go:61] "coredns-6f6b679f8f-c5mlj" [24146559-ea1d-42db-9f61-730ed436dea8] Running
	I0815 17:31:27.787245   32399 system_pods.go:61] "coredns-6f6b679f8f-kfczp" [5d18cfeb-ccfe-4432-b999-510d84438c7a] Running
	I0815 17:31:27.787251   32399 system_pods.go:61] "etcd-ha-683878" [89164a36-1867-4d3e-8b16-4b6e3f5735d9] Running
	I0815 17:31:27.787257   32399 system_pods.go:61] "etcd-ha-683878-m02" [ffd47718-50f2-42b0-8759-390d981a69b8] Running
	I0815 17:31:27.787262   32399 system_pods.go:61] "kindnet-g8lqf" [bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e] Running
	I0815 17:31:27.787268   32399 system_pods.go:61] "kindnet-z5z9h" [525522f9-4aef-49ae-9f3f-02960fe82bff] Running
	I0815 17:31:27.787275   32399 system_pods.go:61] "kube-apiserver-ha-683878" [265e1832-cd30-4ba1-9aa5-5e18cd71e8f0] Running
	I0815 17:31:27.787279   32399 system_pods.go:61] "kube-apiserver-ha-683878-m02" [bff6c9d5-5c64-4220-9a17-f3f08b8e5dab] Running
	I0815 17:31:27.787287   32399 system_pods.go:61] "kube-controller-manager-ha-683878" [e958c9a5-cf23-4d1a-bf25-ab03393607cb] Running
	I0815 17:31:27.787290   32399 system_pods.go:61] "kube-controller-manager-ha-683878-m02" [fa5ae940-8a2a-4a4c-950c-5fe267cddc2d] Running
	I0815 17:31:27.787293   32399 system_pods.go:61] "kube-proxy-89p4v" [58c774bf-7b9a-46ad-8d85-81df9b68415a] Running
	I0815 17:31:27.787296   32399 system_pods.go:61] "kube-proxy-s9hw4" [f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1] Running
	I0815 17:31:27.787299   32399 system_pods.go:61] "kube-scheduler-ha-683878" [fe51d20e-6174-48c9-b170-2eff952a4975] Running
	I0815 17:31:27.787303   32399 system_pods.go:61] "kube-scheduler-ha-683878-m02" [bb94ccf5-231f-4bb5-903d-8664be14bc58] Running
	I0815 17:31:27.787306   32399 system_pods.go:61] "kube-vip-ha-683878" [9c4a5acc-022d-4756-a0c4-6a867b22f0bb] Running
	I0815 17:31:27.787309   32399 system_pods.go:61] "kube-vip-ha-683878-m02" [041e7349-ab7d-4b80-9f0d-ea92f61d637b] Running
	I0815 17:31:27.787312   32399 system_pods.go:61] "storage-provisioner" [78d884cc-a5c3-4f94-b643-b6593cb3f622] Running
	I0815 17:31:27.787318   32399 system_pods.go:74] duration metric: took 182.543913ms to wait for pod list to return data ...
	I0815 17:31:27.787325   32399 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:31:27.978749   32399 request.go:632] Waited for 191.333158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:31:27.978827   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:31:27.978833   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.978840   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.978844   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.982849   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:27.983158   32399 default_sa.go:45] found service account: "default"
	I0815 17:31:27.983178   32399 default_sa.go:55] duration metric: took 195.845847ms for default service account to be created ...
	I0815 17:31:27.983186   32399 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:31:28.178628   32399 request.go:632] Waited for 195.36887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:28.178691   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:28.178698   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:28.178710   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:28.178715   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:28.184296   32399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 17:31:28.188703   32399 system_pods.go:86] 17 kube-system pods found
	I0815 17:31:28.188726   32399 system_pods.go:89] "coredns-6f6b679f8f-c5mlj" [24146559-ea1d-42db-9f61-730ed436dea8] Running
	I0815 17:31:28.188733   32399 system_pods.go:89] "coredns-6f6b679f8f-kfczp" [5d18cfeb-ccfe-4432-b999-510d84438c7a] Running
	I0815 17:31:28.188737   32399 system_pods.go:89] "etcd-ha-683878" [89164a36-1867-4d3e-8b16-4b6e3f5735d9] Running
	I0815 17:31:28.188741   32399 system_pods.go:89] "etcd-ha-683878-m02" [ffd47718-50f2-42b0-8759-390d981a69b8] Running
	I0815 17:31:28.188745   32399 system_pods.go:89] "kindnet-g8lqf" [bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e] Running
	I0815 17:31:28.188748   32399 system_pods.go:89] "kindnet-z5z9h" [525522f9-4aef-49ae-9f3f-02960fe82bff] Running
	I0815 17:31:28.188751   32399 system_pods.go:89] "kube-apiserver-ha-683878" [265e1832-cd30-4ba1-9aa5-5e18cd71e8f0] Running
	I0815 17:31:28.188755   32399 system_pods.go:89] "kube-apiserver-ha-683878-m02" [bff6c9d5-5c64-4220-9a17-f3f08b8e5dab] Running
	I0815 17:31:28.188759   32399 system_pods.go:89] "kube-controller-manager-ha-683878" [e958c9a5-cf23-4d1a-bf25-ab03393607cb] Running
	I0815 17:31:28.188762   32399 system_pods.go:89] "kube-controller-manager-ha-683878-m02" [fa5ae940-8a2a-4a4c-950c-5fe267cddc2d] Running
	I0815 17:31:28.188765   32399 system_pods.go:89] "kube-proxy-89p4v" [58c774bf-7b9a-46ad-8d85-81df9b68415a] Running
	I0815 17:31:28.188769   32399 system_pods.go:89] "kube-proxy-s9hw4" [f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1] Running
	I0815 17:31:28.188773   32399 system_pods.go:89] "kube-scheduler-ha-683878" [fe51d20e-6174-48c9-b170-2eff952a4975] Running
	I0815 17:31:28.188777   32399 system_pods.go:89] "kube-scheduler-ha-683878-m02" [bb94ccf5-231f-4bb5-903d-8664be14bc58] Running
	I0815 17:31:28.188781   32399 system_pods.go:89] "kube-vip-ha-683878" [9c4a5acc-022d-4756-a0c4-6a867b22f0bb] Running
	I0815 17:31:28.188783   32399 system_pods.go:89] "kube-vip-ha-683878-m02" [041e7349-ab7d-4b80-9f0d-ea92f61d637b] Running
	I0815 17:31:28.188786   32399 system_pods.go:89] "storage-provisioner" [78d884cc-a5c3-4f94-b643-b6593cb3f622] Running
	I0815 17:31:28.188792   32399 system_pods.go:126] duration metric: took 205.601444ms to wait for k8s-apps to be running ...
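The k8s-apps check above is a single list of kube-system pods with a per-pod phase check. A client-go sketch of the same idea (kubeconfig path assumed; the log's check also tolerates pods that are still completing, which this simplified version does not):

    // syspods.go: sketch of the kube-system "all pods Running" check from the log.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                log.Fatalf("pod %q is %s, not Running", p.Name, p.Status.Phase)
            }
            fmt.Printf("%q Running\n", p.Name)
        }
    }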
	I0815 17:31:28.188807   32399 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:31:28.188848   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:31:28.203886   32399 system_svc.go:56] duration metric: took 15.072972ms WaitForService to wait for kubelet
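The kubelet service check above is just a systemctl is-active probe executed over SSH; run locally, the equivalent call would be the following sketch (systemd and a kubelet unit assumed):

    // kubeletcheck.go: is-active exits 0 only if the unit is active.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            log.Fatalf("kubelet is not active: %v", err)
        }
        fmt.Println("kubelet service is running")
    }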
	I0815 17:31:28.203906   32399 kubeadm.go:582] duration metric: took 22.654565633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:31:28.203923   32399 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:31:28.378303   32399 request.go:632] Waited for 174.316248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes
	I0815 17:31:28.378368   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes
	I0815 17:31:28.378373   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:28.378381   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:28.378390   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:28.382309   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:28.383084   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:31:28.383108   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:31:28.383120   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:31:28.383125   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:31:28.383129   32399 node_conditions.go:105] duration metric: took 179.202113ms to run NodePressure ...
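The NodePressure verification reads each node's capacity out of the Node status, which is where the "17734596Ki storage / 2 cpu" figures above come from. A client-go sketch of the same readout (kubeconfig path assumed):

    // nodecapacity.go: sketch of the per-node capacity readout behind the NodePressure check.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }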
	I0815 17:31:28.383140   32399 start.go:241] waiting for startup goroutines ...
	I0815 17:31:28.383161   32399 start.go:255] writing updated cluster config ...
	I0815 17:31:28.385481   32399 out.go:201] 
	I0815 17:31:28.386981   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:31:28.387062   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:31:28.388679   32399 out.go:177] * Starting "ha-683878-m03" control-plane node in "ha-683878" cluster
	I0815 17:31:28.389829   32399 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:31:28.389850   32399 cache.go:56] Caching tarball of preloaded images
	I0815 17:31:28.389955   32399 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:31:28.389968   32399 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:31:28.390045   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:31:28.390206   32399 start.go:360] acquireMachinesLock for ha-683878-m03: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:31:28.390248   32399 start.go:364] duration metric: took 23.302µs to acquireMachinesLock for "ha-683878-m03"
	I0815 17:31:28.390270   32399 start.go:93] Provisioning new machine with config: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:31:28.390353   32399 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0815 17:31:28.391973   32399 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:31:28.392052   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:31:28.392085   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:31:28.407053   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
	I0815 17:31:28.407503   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:31:28.407917   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:31:28.407934   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:31:28.408205   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:31:28.408366   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetMachineName
	I0815 17:31:28.408515   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:28.408642   32399 start.go:159] libmachine.API.Create for "ha-683878" (driver="kvm2")
	I0815 17:31:28.408671   32399 client.go:168] LocalClient.Create starting
	I0815 17:31:28.408703   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 17:31:28.408740   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:31:28.408763   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:31:28.408826   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 17:31:28.408852   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:31:28.408869   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:31:28.408896   32399 main.go:141] libmachine: Running pre-create checks...
	I0815 17:31:28.408909   32399 main.go:141] libmachine: (ha-683878-m03) Calling .PreCreateCheck
	I0815 17:31:28.409034   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetConfigRaw
	I0815 17:31:28.409344   32399 main.go:141] libmachine: Creating machine...
	I0815 17:31:28.409358   32399 main.go:141] libmachine: (ha-683878-m03) Calling .Create
	I0815 17:31:28.409457   32399 main.go:141] libmachine: (ha-683878-m03) Creating KVM machine...
	I0815 17:31:28.410578   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found existing default KVM network
	I0815 17:31:28.410708   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found existing private KVM network mk-ha-683878
	I0815 17:31:28.410885   32399 main.go:141] libmachine: (ha-683878-m03) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03 ...
	I0815 17:31:28.410909   32399 main.go:141] libmachine: (ha-683878-m03) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:31:28.410966   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:28.410873   33363 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:31:28.411042   32399 main.go:141] libmachine: (ha-683878-m03) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 17:31:28.631760   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:28.631601   33363 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa...
	I0815 17:31:28.717652   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:28.717528   33363 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/ha-683878-m03.rawdisk...
	I0815 17:31:28.717687   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Writing magic tar header
	I0815 17:31:28.717701   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Writing SSH key tar header
	I0815 17:31:28.717713   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:28.717641   33363 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03 ...
	I0815 17:31:28.717732   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03
	I0815 17:31:28.717808   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 17:31:28.717828   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:31:28.717837   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03 (perms=drwx------)
	I0815 17:31:28.717844   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 17:31:28.717853   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 17:31:28.717860   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins
	I0815 17:31:28.717866   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home
	I0815 17:31:28.717873   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Skipping /home - not owner
	I0815 17:31:28.717884   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 17:31:28.717893   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 17:31:28.717912   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 17:31:28.717937   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 17:31:28.717951   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
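
Note: the id_rsa created above is a plain RSA keypair; the private key is PEM-encoded into the machine directory and the public half is later installed on the VM in authorized_keys format. A sketch of that generation using the standard library plus golang.org/x/crypto/ssh (output file name and key size are assumptions, not minikube's exact choices):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Generate the keypair (key size assumed).
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// PEM-encode the private key, as written to <machine dir>/id_rsa.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(priv),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			panic(err)
		}
		// Render the public key in authorized_keys format for the guest.
		pub, err := ssh.NewPublicKey(&priv.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
	}
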
	I0815 17:31:28.717959   32399 main.go:141] libmachine: (ha-683878-m03) Creating domain...
	I0815 17:31:28.718766   32399 main.go:141] libmachine: (ha-683878-m03) define libvirt domain using xml: 
	I0815 17:31:28.718785   32399 main.go:141] libmachine: (ha-683878-m03) <domain type='kvm'>
	I0815 17:31:28.718795   32399 main.go:141] libmachine: (ha-683878-m03)   <name>ha-683878-m03</name>
	I0815 17:31:28.718803   32399 main.go:141] libmachine: (ha-683878-m03)   <memory unit='MiB'>2200</memory>
	I0815 17:31:28.718813   32399 main.go:141] libmachine: (ha-683878-m03)   <vcpu>2</vcpu>
	I0815 17:31:28.718825   32399 main.go:141] libmachine: (ha-683878-m03)   <features>
	I0815 17:31:28.718832   32399 main.go:141] libmachine: (ha-683878-m03)     <acpi/>
	I0815 17:31:28.718841   32399 main.go:141] libmachine: (ha-683878-m03)     <apic/>
	I0815 17:31:28.718849   32399 main.go:141] libmachine: (ha-683878-m03)     <pae/>
	I0815 17:31:28.718860   32399 main.go:141] libmachine: (ha-683878-m03)     
	I0815 17:31:28.718875   32399 main.go:141] libmachine: (ha-683878-m03)   </features>
	I0815 17:31:28.718885   32399 main.go:141] libmachine: (ha-683878-m03)   <cpu mode='host-passthrough'>
	I0815 17:31:28.718897   32399 main.go:141] libmachine: (ha-683878-m03)   
	I0815 17:31:28.718907   32399 main.go:141] libmachine: (ha-683878-m03)   </cpu>
	I0815 17:31:28.718919   32399 main.go:141] libmachine: (ha-683878-m03)   <os>
	I0815 17:31:28.718933   32399 main.go:141] libmachine: (ha-683878-m03)     <type>hvm</type>
	I0815 17:31:28.718945   32399 main.go:141] libmachine: (ha-683878-m03)     <boot dev='cdrom'/>
	I0815 17:31:28.718955   32399 main.go:141] libmachine: (ha-683878-m03)     <boot dev='hd'/>
	I0815 17:31:28.718963   32399 main.go:141] libmachine: (ha-683878-m03)     <bootmenu enable='no'/>
	I0815 17:31:28.718972   32399 main.go:141] libmachine: (ha-683878-m03)   </os>
	I0815 17:31:28.718981   32399 main.go:141] libmachine: (ha-683878-m03)   <devices>
	I0815 17:31:28.718991   32399 main.go:141] libmachine: (ha-683878-m03)     <disk type='file' device='cdrom'>
	I0815 17:31:28.719007   32399 main.go:141] libmachine: (ha-683878-m03)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/boot2docker.iso'/>
	I0815 17:31:28.719018   32399 main.go:141] libmachine: (ha-683878-m03)       <target dev='hdc' bus='scsi'/>
	I0815 17:31:28.719024   32399 main.go:141] libmachine: (ha-683878-m03)       <readonly/>
	I0815 17:31:28.719029   32399 main.go:141] libmachine: (ha-683878-m03)     </disk>
	I0815 17:31:28.719035   32399 main.go:141] libmachine: (ha-683878-m03)     <disk type='file' device='disk'>
	I0815 17:31:28.719043   32399 main.go:141] libmachine: (ha-683878-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 17:31:28.719051   32399 main.go:141] libmachine: (ha-683878-m03)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/ha-683878-m03.rawdisk'/>
	I0815 17:31:28.719058   32399 main.go:141] libmachine: (ha-683878-m03)       <target dev='hda' bus='virtio'/>
	I0815 17:31:28.719063   32399 main.go:141] libmachine: (ha-683878-m03)     </disk>
	I0815 17:31:28.719069   32399 main.go:141] libmachine: (ha-683878-m03)     <interface type='network'>
	I0815 17:31:28.719079   32399 main.go:141] libmachine: (ha-683878-m03)       <source network='mk-ha-683878'/>
	I0815 17:31:28.719086   32399 main.go:141] libmachine: (ha-683878-m03)       <model type='virtio'/>
	I0815 17:31:28.719112   32399 main.go:141] libmachine: (ha-683878-m03)     </interface>
	I0815 17:31:28.719135   32399 main.go:141] libmachine: (ha-683878-m03)     <interface type='network'>
	I0815 17:31:28.719151   32399 main.go:141] libmachine: (ha-683878-m03)       <source network='default'/>
	I0815 17:31:28.719162   32399 main.go:141] libmachine: (ha-683878-m03)       <model type='virtio'/>
	I0815 17:31:28.719172   32399 main.go:141] libmachine: (ha-683878-m03)     </interface>
	I0815 17:31:28.719179   32399 main.go:141] libmachine: (ha-683878-m03)     <serial type='pty'>
	I0815 17:31:28.719184   32399 main.go:141] libmachine: (ha-683878-m03)       <target port='0'/>
	I0815 17:31:28.719193   32399 main.go:141] libmachine: (ha-683878-m03)     </serial>
	I0815 17:31:28.719203   32399 main.go:141] libmachine: (ha-683878-m03)     <console type='pty'>
	I0815 17:31:28.719215   32399 main.go:141] libmachine: (ha-683878-m03)       <target type='serial' port='0'/>
	I0815 17:31:28.719224   32399 main.go:141] libmachine: (ha-683878-m03)     </console>
	I0815 17:31:28.719235   32399 main.go:141] libmachine: (ha-683878-m03)     <rng model='virtio'>
	I0815 17:31:28.719249   32399 main.go:141] libmachine: (ha-683878-m03)       <backend model='random'>/dev/random</backend>
	I0815 17:31:28.719270   32399 main.go:141] libmachine: (ha-683878-m03)     </rng>
	I0815 17:31:28.719280   32399 main.go:141] libmachine: (ha-683878-m03)     
	I0815 17:31:28.719288   32399 main.go:141] libmachine: (ha-683878-m03)     
	I0815 17:31:28.719297   32399 main.go:141] libmachine: (ha-683878-m03)   </devices>
	I0815 17:31:28.719304   32399 main.go:141] libmachine: (ha-683878-m03) </domain>
	I0815 17:31:28.719318   32399 main.go:141] libmachine: (ha-683878-m03) 
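
Note: the domain definition above is rendered from a template and then handed to libvirt. A stripped-down sketch that renders a comparable document with text/template (field values taken from the log; the real kvm2 driver emits the fuller XML shown above and passes it to the libvirt API):

	package main

	import (
		"os"
		"text/template"
	)

	// Minimal domain template mirroring the structure logged above.
	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
	  <devices>
	    <disk type='file' device='cdrom'><source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/></disk>
	    <disk type='file' device='disk'><driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/></disk>
	    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
	  </devices>
	</domain>
	`

	type domain struct {
		Name, ISO, Disk, Network string
		MemoryMiB, CPUs          int
	}

	func main() {
		t := template.Must(template.New("domain").Parse(domainTmpl))
		d := domain{
			Name: "ha-683878-m03", MemoryMiB: 2200, CPUs: 2,
			ISO:     "/path/to/boot2docker.iso",      // placeholder path
			Disk:    "/path/to/ha-683878-m03.rawdisk", // placeholder path
			Network: "mk-ha-683878",
		}
		// The rendered XML would then be defined and started through libvirt.
		if err := t.Execute(os.Stdout, d); err != nil {
			panic(err)
		}
	}
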
	I0815 17:31:28.725935   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3e:a2:c1 in network default
	I0815 17:31:28.726409   32399 main.go:141] libmachine: (ha-683878-m03) Ensuring networks are active...
	I0815 17:31:28.726427   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:28.727058   32399 main.go:141] libmachine: (ha-683878-m03) Ensuring network default is active
	I0815 17:31:28.727407   32399 main.go:141] libmachine: (ha-683878-m03) Ensuring network mk-ha-683878 is active
	I0815 17:31:28.727832   32399 main.go:141] libmachine: (ha-683878-m03) Getting domain xml...
	I0815 17:31:28.728606   32399 main.go:141] libmachine: (ha-683878-m03) Creating domain...
	I0815 17:31:29.950847   32399 main.go:141] libmachine: (ha-683878-m03) Waiting to get IP...
	I0815 17:31:29.951571   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:29.951964   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:29.951991   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:29.951939   33363 retry.go:31] will retry after 304.500308ms: waiting for machine to come up
	I0815 17:31:30.258371   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:30.258898   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:30.258927   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:30.258847   33363 retry.go:31] will retry after 370.386312ms: waiting for machine to come up
	I0815 17:31:30.630265   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:30.630695   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:30.630717   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:30.630659   33363 retry.go:31] will retry after 429.569597ms: waiting for machine to come up
	I0815 17:31:31.062207   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:31.062738   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:31.062761   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:31.062687   33363 retry.go:31] will retry after 501.692964ms: waiting for machine to come up
	I0815 17:31:31.566268   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:31.566720   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:31.566748   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:31.566659   33363 retry.go:31] will retry after 670.660701ms: waiting for machine to come up
	I0815 17:31:32.238594   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:32.239092   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:32.239118   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:32.239050   33363 retry.go:31] will retry after 896.312096ms: waiting for machine to come up
	I0815 17:31:33.136545   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:33.136915   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:33.136938   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:33.136887   33363 retry.go:31] will retry after 856.407541ms: waiting for machine to come up
	I0815 17:31:33.995449   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:33.995955   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:33.995983   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:33.995903   33363 retry.go:31] will retry after 1.414598205s: waiting for machine to come up
	I0815 17:31:35.412357   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:35.412827   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:35.412859   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:35.412773   33363 retry.go:31] will retry after 1.397444789s: waiting for machine to come up
	I0815 17:31:36.812422   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:36.812840   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:36.812861   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:36.812799   33363 retry.go:31] will retry after 1.619436816s: waiting for machine to come up
	I0815 17:31:38.434084   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:38.434588   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:38.434619   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:38.434529   33363 retry.go:31] will retry after 2.585895781s: waiting for machine to come up
	I0815 17:31:41.021583   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:41.021956   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:41.021986   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:41.021926   33363 retry.go:31] will retry after 3.434031626s: waiting for machine to come up
	I0815 17:31:44.457457   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:44.457897   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:44.457918   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:44.457864   33363 retry.go:31] will retry after 3.461619879s: waiting for machine to come up
	I0815 17:31:47.921569   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:47.922102   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:47.922136   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:47.921900   33363 retry.go:31] will retry after 5.053292471s: waiting for machine to come up
	I0815 17:31:52.978473   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:52.979031   32399 main.go:141] libmachine: (ha-683878-m03) Found IP for machine: 192.168.39.102
	I0815 17:31:52.979052   32399 main.go:141] libmachine: (ha-683878-m03) Reserving static IP address...
	I0815 17:31:52.979066   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has current primary IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:52.979552   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find host DHCP lease matching {name: "ha-683878-m03", mac: "52:54:00:3c:07:a9", ip: "192.168.39.102"} in network mk-ha-683878
	I0815 17:31:53.052883   32399 main.go:141] libmachine: (ha-683878-m03) Reserved static IP address: 192.168.39.102
	I0815 17:31:53.052915   32399 main.go:141] libmachine: (ha-683878-m03) Waiting for SSH to be available...
	I0815 17:31:53.052925   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Getting to WaitForSSH function...
	I0815 17:31:53.055559   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.055954   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.055985   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.056131   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Using SSH client type: external
	I0815 17:31:53.056160   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa (-rw-------)
	I0815 17:31:53.056881   32399 main.go:141] libmachine: (ha-683878-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:31:53.056905   32399 main.go:141] libmachine: (ha-683878-m03) DBG | About to run SSH command:
	I0815 17:31:53.056921   32399 main.go:141] libmachine: (ha-683878-m03) DBG | exit 0
	I0815 17:31:53.180785   32399 main.go:141] libmachine: (ha-683878-m03) DBG | SSH cmd err, output: <nil>: 
	I0815 17:31:53.181085   32399 main.go:141] libmachine: (ha-683878-m03) KVM machine creation complete!
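
Note: the "will retry after …" lines above are a jittered, exponentially growing backoff loop that polls the DHCP leases (and then SSH) until the new VM answers. A self-contained sketch of that polling pattern (stdlib only; initial delay, growth factor, and timeout are assumptions, not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check until it succeeds or the deadline passes, sleeping a
	// growing, jittered interval between attempts.
	func waitFor(check func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for attempt := 1; ; attempt++ {
			if err := check(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("attempt %d failed, will retry after %v\n", attempt, jittered)
			time.Sleep(jittered)
			delay *= 2 // back off before the next attempt
		}
	}

	func main() {
		start := time.Now()
		// Stand-in condition: "machine has no IP yet" for the first two seconds.
		err := waitFor(func() error {
			if time.Since(start) < 2*time.Second {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 30*time.Second)
		fmt.Println("done:", err)
	}
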
	I0815 17:31:53.181456   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetConfigRaw
	I0815 17:31:53.182022   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:53.182220   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:53.182371   32399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 17:31:53.182384   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:31:53.183751   32399 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 17:31:53.183764   32399 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 17:31:53.183770   32399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 17:31:53.183776   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.186394   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.186831   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.186867   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.187016   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.187167   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.187311   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.187459   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.187620   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.187807   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.187818   32399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 17:31:53.291782   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:31:53.291806   32399 main.go:141] libmachine: Detecting the provisioner...
	I0815 17:31:53.291814   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.294620   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.294976   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.294997   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.295230   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.295406   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.295564   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.295699   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.295846   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.296019   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.296032   32399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 17:31:53.397359   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 17:31:53.397479   32399 main.go:141] libmachine: found compatible host: buildroot
	I0815 17:31:53.397494   32399 main.go:141] libmachine: Provisioning with buildroot...
	I0815 17:31:53.397508   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetMachineName
	I0815 17:31:53.397759   32399 buildroot.go:166] provisioning hostname "ha-683878-m03"
	I0815 17:31:53.397785   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetMachineName
	I0815 17:31:53.397957   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.400696   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.401105   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.401135   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.401295   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.401479   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.401639   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.401789   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.401954   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.402119   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.402135   32399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683878-m03 && echo "ha-683878-m03" | sudo tee /etc/hostname
	I0815 17:31:53.518924   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878-m03
	
	I0815 17:31:53.518949   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.521720   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.522053   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.522074   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.522242   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.522435   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.522619   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.522759   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.522909   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.523077   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.523099   32399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683878-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683878-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683878-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:31:53.633947   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
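
Note: each provisioning step above (hostname, /etc/hosts, and the certificate and config steps that follow) is a single command executed on the new VM over SSH with the machine's id_rsa key. A sketch of one such invocation using golang.org/x/crypto/ssh (host address, user, and key path are placeholders lifted from the log):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/ha-683878-m03/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		}
		client, err := ssh.Dial("tcp", "192.168.39.102:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		// Same hostname command the provisioner ran above.
		out, err := session.CombinedOutput(`sudo hostname ha-683878-m03 && echo "ha-683878-m03" | sudo tee /etc/hostname`)
		fmt.Printf("output: %s err: %v\n", out, err)
	}
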
	I0815 17:31:53.633976   32399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:31:53.633995   32399 buildroot.go:174] setting up certificates
	I0815 17:31:53.634007   32399 provision.go:84] configureAuth start
	I0815 17:31:53.634020   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetMachineName
	I0815 17:31:53.634315   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:31:53.636975   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.637357   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.637386   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.637487   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.639565   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.640038   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.640061   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.640234   32399 provision.go:143] copyHostCerts
	I0815 17:31:53.640261   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:31:53.640297   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 17:31:53.640309   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:31:53.640387   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:31:53.640520   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:31:53.640554   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 17:31:53.640560   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:31:53.640588   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:31:53.640648   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:31:53.640669   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 17:31:53.640678   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:31:53.640712   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:31:53.640776   32399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.ha-683878-m03 san=[127.0.0.1 192.168.39.102 ha-683878-m03 localhost minikube]
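
Note: the server certificate above is issued from the profile's CA with SANs covering the node's IPs and hostnames. A compressed sketch of the equivalent crypto/x509 calls (the CA is generated in place here rather than loaded from ca.pem/ca-key.pem, and error handling is elided, purely to keep the example self-contained):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in CA; a real run would parse the existing ca.pem / ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs listed in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-683878-m03"}},
			DNSNames:     []string{"ha-683878-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.102")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
		_ = srvKey // the matching key would be written to server-key.pem
	}
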
	I0815 17:31:53.750181   32399 provision.go:177] copyRemoteCerts
	I0815 17:31:53.750238   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:31:53.750261   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.752842   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.753275   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.753304   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.753444   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.753617   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.753740   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.753856   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:31:53.834774   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:31:53.834875   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:31:53.859383   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:31:53.859457   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:31:53.885113   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:31:53.885196   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:31:53.909108   32399 provision.go:87] duration metric: took 275.089302ms to configureAuth
	I0815 17:31:53.909132   32399 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:31:53.909347   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:31:53.909436   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.912274   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.912683   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.912709   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.912871   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.913055   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.913203   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.913334   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.913469   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.913616   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.913631   32399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:31:54.173348   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:31:54.173375   32399 main.go:141] libmachine: Checking connection to Docker...
	I0815 17:31:54.173385   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetURL
	I0815 17:31:54.174751   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Using libvirt version 6000000
	I0815 17:31:54.176993   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.177277   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.177303   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.177538   32399 main.go:141] libmachine: Docker is up and running!
	I0815 17:31:54.177554   32399 main.go:141] libmachine: Reticulating splines...
	I0815 17:31:54.177561   32399 client.go:171] duration metric: took 25.768881471s to LocalClient.Create
	I0815 17:31:54.177582   32399 start.go:167] duration metric: took 25.768939477s to libmachine.API.Create "ha-683878"
	I0815 17:31:54.177593   32399 start.go:293] postStartSetup for "ha-683878-m03" (driver="kvm2")
	I0815 17:31:54.177606   32399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:31:54.177624   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.177846   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:31:54.177868   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:54.180005   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.180380   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.180408   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.180562   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:54.180722   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.180883   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:54.181027   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:31:54.258943   32399 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:31:54.263288   32399 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:31:54.263313   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:31:54.263385   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:31:54.263498   32399 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 17:31:54.263509   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 17:31:54.263613   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:31:54.272991   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:31:54.297358   32399 start.go:296] duration metric: took 119.753559ms for postStartSetup
	I0815 17:31:54.297418   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetConfigRaw
	I0815 17:31:54.297961   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:31:54.300667   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.301051   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.301088   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.301327   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:31:54.301514   32399 start.go:128] duration metric: took 25.911150347s to createHost
	I0815 17:31:54.301539   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:54.303671   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.304033   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.304061   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.304193   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:54.304352   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.304570   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.304720   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:54.304925   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:54.305111   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:54.305126   32399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:31:54.405817   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723743114.383134289
	
	I0815 17:31:54.405839   32399 fix.go:216] guest clock: 1723743114.383134289
	I0815 17:31:54.405849   32399 fix.go:229] Guest: 2024-08-15 17:31:54.383134289 +0000 UTC Remote: 2024-08-15 17:31:54.30152525 +0000 UTC m=+199.534419910 (delta=81.609039ms)
	I0815 17:31:54.405867   32399 fix.go:200] guest clock delta is within tolerance: 81.609039ms
	I0815 17:31:54.405873   32399 start.go:83] releasing machines lock for "ha-683878-m03", held for 26.015614375s
	I0815 17:31:54.405902   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.406141   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:31:54.408440   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.408787   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.408820   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.410739   32399 out.go:177] * Found network options:
	I0815 17:31:54.411976   32399 out.go:177]   - NO_PROXY=192.168.39.17,192.168.39.232
	W0815 17:31:54.413078   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:31:54.413103   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:31:54.413132   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.413584   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.413723   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.413829   32399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:31:54.413866   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	W0815 17:31:54.413943   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:31:54.413971   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:31:54.414031   32399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:31:54.414051   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:54.416376   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.416579   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.416776   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.416803   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.416945   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:54.416966   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.416989   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.417100   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.417164   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:54.417261   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:54.417335   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.417403   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:31:54.417439   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:54.417556   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:31:54.643699   32399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:31:54.649737   32399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:31:54.649805   32399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:31:54.669695   32399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 17:31:54.669719   32399 start.go:495] detecting cgroup driver to use...
	I0815 17:31:54.669781   32399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:31:54.689200   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:31:54.705716   32399 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:31:54.705767   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:31:54.721518   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:31:54.737193   32399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:31:54.878133   32399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:31:55.043938   32399 docker.go:233] disabling docker service ...
	I0815 17:31:55.044009   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:31:55.057741   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:31:55.070816   32399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:31:55.190566   32399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:31:55.301710   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:31:55.314980   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:31:55.333061   32399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:31:55.333158   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.343340   32399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:31:55.343408   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.353288   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.363495   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.374357   32399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:31:55.384672   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.394992   32399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.412506   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.422696   32399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:31:55.432113   32399 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 17:31:55.432161   32399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 17:31:55.444560   32399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:31:55.453428   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:31:55.596823   32399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:31:55.735933   32399 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:31:55.736005   32399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:31:55.740912   32399 start.go:563] Will wait 60s for crictl version
	I0815 17:31:55.740966   32399 ssh_runner.go:195] Run: which crictl
	I0815 17:31:55.744555   32399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:31:55.781290   32399 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:31:55.781354   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:31:55.808725   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:31:55.837300   32399 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:31:55.838873   32399 out.go:177]   - env NO_PROXY=192.168.39.17
	I0815 17:31:55.840255   32399 out.go:177]   - env NO_PROXY=192.168.39.17,192.168.39.232
	I0815 17:31:55.841437   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:31:55.844175   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:55.844551   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:55.844574   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:55.844808   32399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:31:55.848978   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:31:55.861180   32399 mustload.go:65] Loading cluster: ha-683878
	I0815 17:31:55.861433   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:31:55.861784   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:31:55.861826   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:31:55.876124   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0815 17:31:55.876509   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:31:55.876942   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:31:55.876959   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:31:55.877267   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:31:55.877438   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:31:55.879049   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:31:55.879368   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:31:55.879402   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:31:55.895207   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0815 17:31:55.895642   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:31:55.896119   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:31:55.896144   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:31:55.896465   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:31:55.896631   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:31:55.896784   32399 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878 for IP: 192.168.39.102
	I0815 17:31:55.896800   32399 certs.go:194] generating shared ca certs ...
	I0815 17:31:55.896817   32399 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:31:55.896930   32399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:31:55.896964   32399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:31:55.896973   32399 certs.go:256] generating profile certs ...
	I0815 17:31:55.897039   32399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key
	I0815 17:31:55.897062   32399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.79bf3ced
	I0815 17:31:55.897075   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.79bf3ced with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17 192.168.39.232 192.168.39.102 192.168.39.254]
	I0815 17:31:55.960572   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.79bf3ced ...
	I0815 17:31:55.960600   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.79bf3ced: {Name:mk99fa0b5f620c685341a21e4bc78e62e9b202fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:31:55.960752   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.79bf3ced ...
	I0815 17:31:55.960763   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.79bf3ced: {Name:mk311eb5add21f571a8af06cc429c9bc098bb06b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:31:55.960834   32399 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.79bf3ced -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt
	I0815 17:31:55.960954   32399 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.79bf3ced -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key
	I0815 17:31:55.961094   32399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key
	I0815 17:31:55.961108   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:31:55.961126   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:31:55.961140   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:31:55.961158   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:31:55.961170   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:31:55.961183   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:31:55.961194   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:31:55.961205   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:31:55.961256   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 17:31:55.961284   32399 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 17:31:55.961293   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:31:55.961317   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:31:55.961337   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:31:55.961357   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:31:55.961398   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:31:55.961429   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 17:31:55.961447   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:31:55.961459   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 17:31:55.961487   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:31:55.964187   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:31:55.964579   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:31:55.964605   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:31:55.964783   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:31:55.964979   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:31:55.965114   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:31:55.965284   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:31:56.036845   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 17:31:56.041500   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 17:31:56.052745   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 17:31:56.056704   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 17:31:56.067039   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 17:31:56.071383   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 17:31:56.087715   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 17:31:56.092010   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 17:31:56.109688   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 17:31:56.115532   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 17:31:56.128033   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 17:31:56.132432   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 17:31:56.145389   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:31:56.174587   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:31:56.207751   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:31:56.235127   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:31:56.261869   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0815 17:31:56.286224   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 17:31:56.310536   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:31:56.333198   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:31:56.356534   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 17:31:56.380651   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:31:56.403739   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 17:31:56.426359   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 17:31:56.443642   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 17:31:56.459514   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 17:31:56.476982   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 17:31:56.493955   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 17:31:56.509776   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 17:31:56.525475   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 17:31:56.541462   32399 ssh_runner.go:195] Run: openssl version
	I0815 17:31:56.546979   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 17:31:56.556842   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 17:31:56.561162   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 17:31:56.561207   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 17:31:56.566795   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:31:56.577387   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:31:56.587983   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:31:56.592183   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:31:56.592226   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:31:56.598278   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:31:56.608642   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 17:31:56.619243   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 17:31:56.623720   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 17:31:56.623768   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 17:31:56.629307   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 17:31:56.639663   32399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:31:56.643786   32399 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:31:56.643841   32399 kubeadm.go:934] updating node {m03 192.168.39.102 8443 v1.31.0 crio true true} ...
	I0815 17:31:56.643923   32399 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683878-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:31:56.643948   32399 kube-vip.go:115] generating kube-vip config ...
	I0815 17:31:56.643973   32399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 17:31:56.658907   32399 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:31:56.658960   32399 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
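	The YAML above is the static-pod manifest minikube generates for kube-vip and later copies to /etc/kubernetes/manifests/kube-vip.yaml (see the 1441-byte scp below); kube-vip ARP-advertises the control-plane VIP 192.168.39.254 and load-balances API-server traffic on port 8443 across the control-plane nodes. A quick, hypothetical way to sanity-check such a generated manifest, separate from anything minikube itself does, is to unmarshal it into a corev1.Pod:

	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		// Assumed local copy of the manifest shown above.
		data, err := os.ReadFile("kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		fmt.Printf("static pod %s/%s: %d container(s), hostNetwork=%v, image=%s\n",
			pod.Namespace, pod.Name, len(pod.Spec.Containers), pod.Spec.HostNetwork,
			pod.Spec.Containers[0].Image)
	}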
	I0815 17:31:56.658997   32399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:31:56.669211   32399 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 17:31:56.669251   32399 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 17:31:56.678795   32399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 17:31:56.678795   32399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0815 17:31:56.678822   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 17:31:56.678858   32399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0815 17:31:56.678879   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 17:31:56.678910   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 17:31:56.678946   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 17:31:56.678879   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:31:56.688360   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 17:31:56.688385   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 17:31:56.688776   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 17:31:56.688791   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 17:31:56.702959   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 17:31:56.703073   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 17:31:56.806479   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 17:31:56.806520   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0815 17:31:57.547743   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 17:31:57.558405   32399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 17:31:57.575793   32399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:31:57.593950   32399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 17:31:57.610985   32399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:31:57.614996   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:31:57.627994   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:31:57.761190   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:31:57.778062   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:31:57.778508   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:31:57.778553   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:31:57.793848   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0815 17:31:57.794348   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:31:57.794859   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:31:57.794879   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:31:57.795374   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:31:57.795570   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:31:57.795722   32399 start.go:317] joinCluster: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:31:57.795841   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 17:31:57.795863   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:31:57.799015   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:31:57.799408   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:31:57.799447   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:31:57.799520   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:31:57.799702   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:31:57.799865   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:31:57.799975   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:31:57.949722   32399 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:31:57.949774   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01d3mu.y2f8jenobaipuomd --discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683878-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I0815 17:32:21.061931   32399 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01d3mu.y2f8jenobaipuomd --discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683878-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (23.112127181s)
	I0815 17:32:21.061975   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 17:32:21.557613   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683878-m03 minikube.k8s.io/updated_at=2024_08_15T17_32_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=ha-683878 minikube.k8s.io/primary=false
	I0815 17:32:21.681719   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-683878-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 17:32:21.805600   32399 start.go:319] duration metric: took 24.009873883s to joinCluster
	I0815 17:32:21.805670   32399 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:32:21.806123   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:32:21.807232   32399 out.go:177] * Verifying Kubernetes components...
	I0815 17:32:21.808598   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:32:22.076040   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:32:22.173020   32399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:32:22.173238   32399 kapi.go:59] client config for ha-683878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 17:32:22.173293   32399 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.17:8443
	I0815 17:32:22.173499   32399 node_ready.go:35] waiting up to 6m0s for node "ha-683878-m03" to be "Ready" ...
	I0815 17:32:22.173577   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:22.173584   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:22.173592   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:22.173597   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:22.176897   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:22.673973   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:22.673996   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:22.674007   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:22.674012   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:22.677618   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:23.174264   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:23.174291   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:23.174302   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:23.174306   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:23.177438   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:23.674740   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:23.674766   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:23.674778   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:23.674784   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:23.682038   32399 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 17:32:24.173830   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:24.173852   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:24.173860   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:24.173864   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:24.177131   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:24.177861   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:24.673796   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:24.673819   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:24.673827   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:24.673831   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:24.677195   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:25.174368   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:25.174387   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:25.174396   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:25.174400   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:25.183660   32399 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0815 17:32:25.673777   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:25.673796   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:25.673804   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:25.673807   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:25.677326   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:26.173858   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:26.173879   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:26.173887   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:26.173892   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:26.177125   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:26.674647   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:26.674669   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:26.674680   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:26.674685   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:26.677932   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:26.678662   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:27.174495   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:27.174516   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:27.174524   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:27.174528   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:27.177834   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:27.673799   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:27.673819   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:27.673827   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:27.673830   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:27.676845   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:28.174280   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:28.174302   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:28.174309   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:28.174312   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:28.177826   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:28.673925   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:28.673952   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:28.673964   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:28.673970   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:28.677210   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:29.173938   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:29.173957   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:29.173965   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:29.173971   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:29.177003   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:29.177664   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:29.674034   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:29.674060   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:29.674072   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:29.674077   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:29.676942   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:30.174001   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:30.174028   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:30.174042   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:30.174047   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:30.177454   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:30.674403   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:30.674429   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:30.674437   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:30.674441   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:30.677462   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:31.174102   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:31.174126   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:31.174135   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:31.174138   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:31.176996   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:31.177822   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:31.674178   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:31.674205   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:31.674216   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:31.674222   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:31.677546   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:32.174085   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:32.174112   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:32.174123   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:32.174129   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:32.177640   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:32.674633   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:32.674659   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:32.674672   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:32.674678   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:32.678233   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:33.174528   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:33.174553   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:33.174562   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:33.174569   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:33.177783   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:33.178261   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:33.674150   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:33.674173   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:33.674183   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:33.674188   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:33.677506   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:34.174304   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:34.174326   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:34.174332   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:34.174337   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:34.177467   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:34.674560   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:34.674582   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:34.674590   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:34.674596   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:34.678094   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:35.174310   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:35.174335   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:35.174345   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:35.174350   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:35.177605   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:35.674579   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:35.674598   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:35.674607   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:35.674611   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:35.678147   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:35.678994   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:36.174231   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:36.174252   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:36.174260   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:36.174264   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:36.177589   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:36.674574   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:36.674596   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:36.674604   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:36.674609   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:36.678004   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:37.174347   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:37.174370   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:37.174381   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:37.174388   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:37.177647   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:37.673743   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:37.673764   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:37.673772   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:37.673777   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:37.676724   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:38.174338   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:38.174360   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:38.174368   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:38.174372   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:38.178127   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:38.178726   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:38.674525   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:38.674548   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:38.674559   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:38.674566   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:38.677778   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:39.174657   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:39.174683   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:39.174696   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:39.174703   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:39.177931   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:39.674704   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:39.674729   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:39.674741   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:39.674748   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:39.678143   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:40.174418   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:40.174440   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:40.174448   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:40.174452   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:40.177712   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:40.674640   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:40.674661   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:40.674670   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:40.674674   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:40.677879   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:40.678783   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:41.174250   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:41.174270   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.174278   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.174283   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.177452   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:41.178159   32399 node_ready.go:49] node "ha-683878-m03" has status "Ready":"True"
	I0815 17:32:41.178180   32399 node_ready.go:38] duration metric: took 19.00466153s for node "ha-683878-m03" to be "Ready" ...
	I0815 17:32:41.178191   32399 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:32:41.178269   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:41.178282   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.178291   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.178295   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.185480   32399 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 17:32:41.194636   32399 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.194737   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-c5mlj
	I0815 17:32:41.194748   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.194760   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.194773   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.200799   32399 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 17:32:41.201737   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.201751   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.201759   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.201762   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.204079   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.204643   32399 pod_ready.go:93] pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.204667   32399 pod_ready.go:82] duration metric: took 10.00508ms for pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.204681   32399 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.204747   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-kfczp
	I0815 17:32:41.204758   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.204767   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.204778   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.207460   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.208013   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.208027   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.208033   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.208037   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.210374   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.210862   32399 pod_ready.go:93] pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.210876   32399 pod_ready.go:82] duration metric: took 6.18734ms for pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.210885   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.210930   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878
	I0815 17:32:41.210939   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.210948   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.210956   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.213116   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.213720   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.213733   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.213740   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.213743   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.216319   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.217149   32399 pod_ready.go:93] pod "etcd-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.217165   32399 pod_ready.go:82] duration metric: took 6.274422ms for pod "etcd-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.217173   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.217219   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878-m02
	I0815 17:32:41.217226   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.217233   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.217238   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.219588   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.220341   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:41.220357   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.220367   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.220372   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.222638   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.223154   32399 pod_ready.go:93] pod "etcd-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.223172   32399 pod_ready.go:82] duration metric: took 5.990647ms for pod "etcd-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.223183   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.374524   32399 request.go:632] Waited for 151.285572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878-m03
	I0815 17:32:41.374582   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878-m03
	I0815 17:32:41.374587   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.374594   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.374599   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.377348   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.574281   32399 request.go:632] Waited for 196.280265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:41.574338   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:41.574343   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.574350   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.574354   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.577446   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:41.577906   32399 pod_ready.go:93] pod "etcd-ha-683878-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.577921   32399 pod_ready.go:82] duration metric: took 354.73017ms for pod "etcd-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.577938   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.775204   32399 request.go:632] Waited for 197.209512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878
	I0815 17:32:41.775274   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878
	I0815 17:32:41.775281   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.775288   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.775295   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.778946   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:41.975063   32399 request.go:632] Waited for 195.363549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.975132   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.975143   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.975155   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.975164   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.978691   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:41.979300   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.979316   32399 pod_ready.go:82] duration metric: took 401.371948ms for pod "kube-apiserver-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.979325   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.174789   32399 request.go:632] Waited for 195.405615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m02
	I0815 17:32:42.174852   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m02
	I0815 17:32:42.174857   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.174864   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.174868   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.178064   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:42.375253   32399 request.go:632] Waited for 196.415171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:42.375320   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:42.375330   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.375341   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.375345   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.378731   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:42.379201   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:42.379220   32399 pod_ready.go:82] duration metric: took 399.888478ms for pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.379232   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.574296   32399 request.go:632] Waited for 194.992186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m03
	I0815 17:32:42.574362   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m03
	I0815 17:32:42.574367   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.574374   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.574378   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.578084   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:42.775187   32399 request.go:632] Waited for 196.347179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:42.775235   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:42.775244   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.775252   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.775257   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.778291   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:42.778885   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:42.778903   32399 pod_ready.go:82] duration metric: took 399.66364ms for pod "kube-apiserver-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.778912   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.974310   32399 request.go:632] Waited for 195.305249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878
	I0815 17:32:42.974369   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878
	I0815 17:32:42.974377   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.974388   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.974395   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.977810   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.174987   32399 request.go:632] Waited for 196.373013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:43.175054   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:43.175060   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.175067   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.175071   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.177986   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:43.178590   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:43.178608   32399 pod_ready.go:82] duration metric: took 399.690127ms for pod "kube-controller-manager-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.178618   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.374684   32399 request.go:632] Waited for 195.978252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m02
	I0815 17:32:43.374733   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m02
	I0815 17:32:43.374738   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.374746   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.374750   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.378406   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.574421   32399 request.go:632] Waited for 195.312861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:43.574472   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:43.574477   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.574486   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.574491   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.577649   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.578447   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:43.578470   32399 pod_ready.go:82] duration metric: took 399.832761ms for pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.578486   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.774568   32399 request.go:632] Waited for 196.016436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m03
	I0815 17:32:43.774649   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m03
	I0815 17:32:43.774656   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.774664   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.774669   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.778046   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.974626   32399 request.go:632] Waited for 195.821317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:43.974693   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:43.974698   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.974705   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.974710   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.978245   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.978820   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:43.978848   32399 pod_ready.go:82] duration metric: took 400.353646ms for pod "kube-controller-manager-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.978863   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-89p4v" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.174830   32399 request.go:632] Waited for 195.889616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89p4v
	I0815 17:32:44.174914   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89p4v
	I0815 17:32:44.174925   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.174933   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.174939   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.178234   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:44.375251   32399 request.go:632] Waited for 196.352467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:44.375309   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:44.375314   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.375321   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.375325   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.378310   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:44.379019   32399 pod_ready.go:93] pod "kube-proxy-89p4v" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:44.379040   32399 pod_ready.go:82] duration metric: took 400.166256ms for pod "kube-proxy-89p4v" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.379052   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8bp98" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.575166   32399 request.go:632] Waited for 196.047647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bp98
	I0815 17:32:44.575235   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bp98
	I0815 17:32:44.575243   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.575253   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.575262   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.578454   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:44.774652   32399 request.go:632] Waited for 195.35787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:44.774707   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:44.774712   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.774720   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.774723   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.777575   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:44.778134   32399 pod_ready.go:93] pod "kube-proxy-8bp98" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:44.778152   32399 pod_ready.go:82] duration metric: took 399.092736ms for pod "kube-proxy-8bp98" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.778162   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s9hw4" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.974958   32399 request.go:632] Waited for 196.713091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9hw4
	I0815 17:32:44.975028   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9hw4
	I0815 17:32:44.975035   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.975045   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.975054   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.978400   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.174573   32399 request.go:632] Waited for 195.222828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:45.174689   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:45.174704   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.174714   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.174721   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.178336   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.178954   32399 pod_ready.go:93] pod "kube-proxy-s9hw4" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:45.178980   32399 pod_ready.go:82] duration metric: took 400.811627ms for pod "kube-proxy-s9hw4" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.178995   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.375048   32399 request.go:632] Waited for 195.962331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878
	I0815 17:32:45.375123   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878
	I0815 17:32:45.375128   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.375136   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.375140   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.378524   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.574451   32399 request.go:632] Waited for 195.265569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:45.574519   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:45.574524   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.574531   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.574536   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.577566   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.578090   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:45.578107   32399 pod_ready.go:82] duration metric: took 399.104498ms for pod "kube-scheduler-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.578119   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.775273   32399 request.go:632] Waited for 197.08497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m02
	I0815 17:32:45.775354   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m02
	I0815 17:32:45.775361   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.775368   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.775376   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.778426   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.974866   32399 request.go:632] Waited for 195.970601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:45.974917   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:45.974923   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.974930   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.974941   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.977926   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:45.978390   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:45.978407   32399 pod_ready.go:82] duration metric: took 400.28082ms for pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.978417   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:46.174534   32399 request.go:632] Waited for 196.052755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m03
	I0815 17:32:46.174627   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m03
	I0815 17:32:46.174639   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.174650   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.174658   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.177715   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:46.374808   32399 request.go:632] Waited for 196.339932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:46.374863   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:46.374870   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.374878   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.374888   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.378435   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:46.379191   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:46.379206   32399 pod_ready.go:82] duration metric: took 400.783564ms for pod "kube-scheduler-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:46.379215   32399 pod_ready.go:39] duration metric: took 5.201008555s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:32:46.379231   32399 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:32:46.379286   32399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:32:46.398250   32399 api_server.go:72] duration metric: took 24.592549351s to wait for apiserver process to appear ...
	I0815 17:32:46.398276   32399 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:32:46.398297   32399 api_server.go:253] Checking apiserver healthz at https://192.168.39.17:8443/healthz ...
	I0815 17:32:46.405902   32399 api_server.go:279] https://192.168.39.17:8443/healthz returned 200:
	ok
	I0815 17:32:46.405978   32399 round_trippers.go:463] GET https://192.168.39.17:8443/version
	I0815 17:32:46.405988   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.406001   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.406012   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.406873   32399 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 17:32:46.406947   32399 api_server.go:141] control plane version: v1.31.0
	I0815 17:32:46.406960   32399 api_server.go:131] duration metric: took 8.676545ms to wait for apiserver health ...
	I0815 17:32:46.406971   32399 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:32:46.575323   32399 request.go:632] Waited for 168.273095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:46.575399   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:46.575407   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.575416   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.575422   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.591852   32399 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0815 17:32:46.599397   32399 system_pods.go:59] 24 kube-system pods found
	I0815 17:32:46.599422   32399 system_pods.go:61] "coredns-6f6b679f8f-c5mlj" [24146559-ea1d-42db-9f61-730ed436dea8] Running
	I0815 17:32:46.599427   32399 system_pods.go:61] "coredns-6f6b679f8f-kfczp" [5d18cfeb-ccfe-4432-b999-510d84438c7a] Running
	I0815 17:32:46.599430   32399 system_pods.go:61] "etcd-ha-683878" [89164a36-1867-4d3e-8b16-4b6e3f5735d9] Running
	I0815 17:32:46.599434   32399 system_pods.go:61] "etcd-ha-683878-m02" [ffd47718-50f2-42b0-8759-390d981a69b8] Running
	I0815 17:32:46.599437   32399 system_pods.go:61] "etcd-ha-683878-m03" [0d49fecb-c4ae-4f81-94e3-1042caeb1d6e] Running
	I0815 17:32:46.599441   32399 system_pods.go:61] "kindnet-6bccr" [43768eb8-6f4d-443f-afd5-af43e96556a1] Running
	I0815 17:32:46.599446   32399 system_pods.go:61] "kindnet-g8lqf" [bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e] Running
	I0815 17:32:46.599451   32399 system_pods.go:61] "kindnet-z5z9h" [525522f9-4aef-49ae-9f3f-02960fe82bff] Running
	I0815 17:32:46.599455   32399 system_pods.go:61] "kube-apiserver-ha-683878" [265e1832-cd30-4ba1-9aa5-5e18cd71e8f0] Running
	I0815 17:32:46.599460   32399 system_pods.go:61] "kube-apiserver-ha-683878-m02" [bff6c9d5-5c64-4220-9a17-f3f08b8e5dab] Running
	I0815 17:32:46.599469   32399 system_pods.go:61] "kube-apiserver-ha-683878-m03" [a39a5463-47e0-4a1e-bad5-dca1544c5a3a] Running
	I0815 17:32:46.599474   32399 system_pods.go:61] "kube-controller-manager-ha-683878" [e958c9a5-cf23-4d1a-bf25-ab03393607cb] Running
	I0815 17:32:46.599479   32399 system_pods.go:61] "kube-controller-manager-ha-683878-m02" [fa5ae940-8a2a-4a4c-950c-5fe267cddc2d] Running
	I0815 17:32:46.599487   32399 system_pods.go:61] "kube-controller-manager-ha-683878-m03" [9352fe4c-bc08-4fc3-b001-e34c7b434253] Running
	I0815 17:32:46.599493   32399 system_pods.go:61] "kube-proxy-89p4v" [58c774bf-7b9a-46ad-8d85-81df9b68415a] Running
	I0815 17:32:46.599500   32399 system_pods.go:61] "kube-proxy-8bp98" [009b24bb-3d29-4ba6-b18f-0694f7479636] Running
	I0815 17:32:46.599504   32399 system_pods.go:61] "kube-proxy-s9hw4" [f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1] Running
	I0815 17:32:46.599510   32399 system_pods.go:61] "kube-scheduler-ha-683878" [fe51d20e-6174-48c9-b170-2eff952a4975] Running
	I0815 17:32:46.599513   32399 system_pods.go:61] "kube-scheduler-ha-683878-m02" [bb94ccf5-231f-4bb5-903d-8664be14bc58] Running
	I0815 17:32:46.599519   32399 system_pods.go:61] "kube-scheduler-ha-683878-m03" [1738390e-8c78-48b7-b2cd-3beb5df2cbeb] Running
	I0815 17:32:46.599522   32399 system_pods.go:61] "kube-vip-ha-683878" [9c4a5acc-022d-4756-a0c4-6a867b22f0bb] Running
	I0815 17:32:46.599525   32399 system_pods.go:61] "kube-vip-ha-683878-m02" [041e7349-ab7d-4b80-9f0d-ea92f61d637b] Running
	I0815 17:32:46.599528   32399 system_pods.go:61] "kube-vip-ha-683878-m03" [4092675a-3aac-4e04-b507-c5434f0e3f1c] Running
	I0815 17:32:46.599531   32399 system_pods.go:61] "storage-provisioner" [78d884cc-a5c3-4f94-b643-b6593cb3f622] Running
	I0815 17:32:46.599537   32399 system_pods.go:74] duration metric: took 192.559759ms to wait for pod list to return data ...
	I0815 17:32:46.599547   32399 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:32:46.774966   32399 request.go:632] Waited for 175.342628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:32:46.775030   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:32:46.775038   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.775049   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.775060   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.779252   32399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:32:46.779365   32399 default_sa.go:45] found service account: "default"
	I0815 17:32:46.779379   32399 default_sa.go:55] duration metric: took 179.826969ms for default service account to be created ...
	I0815 17:32:46.779387   32399 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:32:46.974726   32399 request.go:632] Waited for 195.258635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:46.974801   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:46.974807   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.974816   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.974824   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.980532   32399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 17:32:46.987356   32399 system_pods.go:86] 24 kube-system pods found
	I0815 17:32:46.987387   32399 system_pods.go:89] "coredns-6f6b679f8f-c5mlj" [24146559-ea1d-42db-9f61-730ed436dea8] Running
	I0815 17:32:46.987392   32399 system_pods.go:89] "coredns-6f6b679f8f-kfczp" [5d18cfeb-ccfe-4432-b999-510d84438c7a] Running
	I0815 17:32:46.987397   32399 system_pods.go:89] "etcd-ha-683878" [89164a36-1867-4d3e-8b16-4b6e3f5735d9] Running
	I0815 17:32:46.987401   32399 system_pods.go:89] "etcd-ha-683878-m02" [ffd47718-50f2-42b0-8759-390d981a69b8] Running
	I0815 17:32:46.987405   32399 system_pods.go:89] "etcd-ha-683878-m03" [0d49fecb-c4ae-4f81-94e3-1042caeb1d6e] Running
	I0815 17:32:46.987408   32399 system_pods.go:89] "kindnet-6bccr" [43768eb8-6f4d-443f-afd5-af43e96556a1] Running
	I0815 17:32:46.987412   32399 system_pods.go:89] "kindnet-g8lqf" [bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e] Running
	I0815 17:32:46.987415   32399 system_pods.go:89] "kindnet-z5z9h" [525522f9-4aef-49ae-9f3f-02960fe82bff] Running
	I0815 17:32:46.987419   32399 system_pods.go:89] "kube-apiserver-ha-683878" [265e1832-cd30-4ba1-9aa5-5e18cd71e8f0] Running
	I0815 17:32:46.987422   32399 system_pods.go:89] "kube-apiserver-ha-683878-m02" [bff6c9d5-5c64-4220-9a17-f3f08b8e5dab] Running
	I0815 17:32:46.987425   32399 system_pods.go:89] "kube-apiserver-ha-683878-m03" [a39a5463-47e0-4a1e-bad5-dca1544c5a3a] Running
	I0815 17:32:46.987430   32399 system_pods.go:89] "kube-controller-manager-ha-683878" [e958c9a5-cf23-4d1a-bf25-ab03393607cb] Running
	I0815 17:32:46.987435   32399 system_pods.go:89] "kube-controller-manager-ha-683878-m02" [fa5ae940-8a2a-4a4c-950c-5fe267cddc2d] Running
	I0815 17:32:46.987438   32399 system_pods.go:89] "kube-controller-manager-ha-683878-m03" [9352fe4c-bc08-4fc3-b001-e34c7b434253] Running
	I0815 17:32:46.987441   32399 system_pods.go:89] "kube-proxy-89p4v" [58c774bf-7b9a-46ad-8d85-81df9b68415a] Running
	I0815 17:32:46.987446   32399 system_pods.go:89] "kube-proxy-8bp98" [009b24bb-3d29-4ba6-b18f-0694f7479636] Running
	I0815 17:32:46.987449   32399 system_pods.go:89] "kube-proxy-s9hw4" [f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1] Running
	I0815 17:32:46.987453   32399 system_pods.go:89] "kube-scheduler-ha-683878" [fe51d20e-6174-48c9-b170-2eff952a4975] Running
	I0815 17:32:46.987456   32399 system_pods.go:89] "kube-scheduler-ha-683878-m02" [bb94ccf5-231f-4bb5-903d-8664be14bc58] Running
	I0815 17:32:46.987459   32399 system_pods.go:89] "kube-scheduler-ha-683878-m03" [1738390e-8c78-48b7-b2cd-3beb5df2cbeb] Running
	I0815 17:32:46.987463   32399 system_pods.go:89] "kube-vip-ha-683878" [9c4a5acc-022d-4756-a0c4-6a867b22f0bb] Running
	I0815 17:32:46.987466   32399 system_pods.go:89] "kube-vip-ha-683878-m02" [041e7349-ab7d-4b80-9f0d-ea92f61d637b] Running
	I0815 17:32:46.987468   32399 system_pods.go:89] "kube-vip-ha-683878-m03" [4092675a-3aac-4e04-b507-c5434f0e3f1c] Running
	I0815 17:32:46.987471   32399 system_pods.go:89] "storage-provisioner" [78d884cc-a5c3-4f94-b643-b6593cb3f622] Running
	I0815 17:32:46.987477   32399 system_pods.go:126] duration metric: took 208.08207ms to wait for k8s-apps to be running ...
	I0815 17:32:46.987487   32399 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:32:46.987530   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:32:47.010760   32399 system_svc.go:56] duration metric: took 23.262262ms WaitForService to wait for kubelet
	I0815 17:32:47.010792   32399 kubeadm.go:582] duration metric: took 25.205096133s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:32:47.010818   32399 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:32:47.175223   32399 request.go:632] Waited for 164.325537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes
	I0815 17:32:47.175289   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes
	I0815 17:32:47.175294   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:47.175302   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:47.175309   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:47.179259   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:47.180358   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:32:47.180379   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:32:47.180390   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:32:47.180396   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:32:47.180401   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:32:47.180406   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:32:47.180412   32399 node_conditions.go:105] duration metric: took 169.587997ms to run NodePressure ...
	I0815 17:32:47.180438   32399 start.go:241] waiting for startup goroutines ...
	I0815 17:32:47.180589   32399 start.go:255] writing updated cluster config ...
	I0815 17:32:47.181028   32399 ssh_runner.go:195] Run: rm -f paused
	I0815 17:32:47.233242   32399 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 17:32:47.236171   32399 out.go:177] * Done! kubectl is now configured to use "ha-683878" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.284835215Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743386284804478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=98849c7f-afab-4e30-ab65-5656f1211c89 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.285820685Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3a48427-9b52-4bbe-b0ff-837d453be59a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.285881852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3a48427-9b52-4bbe-b0ff-837d453be59a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.286426461Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743172239012531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742979212891907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742978669055170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5b3e71b5c2a125da17643e4a273019b9d35fe1d6d57d95662dbfd5f406ed50,PodSandboxId:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723742977701324355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723742965431323988,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172374296
1580945625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b,PodSandboxId:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172374295380
3914568,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723742950245715779,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b,PodSandboxId:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723742950305057413,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723742950264725617,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7,PodSandboxId:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723742950210072284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3a48427-9b52-4bbe-b0ff-837d453be59a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.325687475Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9f06aa6b-b21b-493c-b998-de34fea7afc0 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.325762985Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f06aa6b-b21b-493c-b998-de34fea7afc0 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.327422208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=280238d8-efbb-415c-977c-6231d5d209ae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.327947630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743386327924166,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=280238d8-efbb-415c-977c-6231d5d209ae name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.328847100Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d078a501-8f35-4405-9e91-8b05137c6421 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.328901506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d078a501-8f35-4405-9e91-8b05137c6421 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.331893204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743172239012531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742979212891907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742978669055170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5b3e71b5c2a125da17643e4a273019b9d35fe1d6d57d95662dbfd5f406ed50,PodSandboxId:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723742977701324355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723742965431323988,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172374296
1580945625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b,PodSandboxId:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172374295380
3914568,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723742950245715779,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b,PodSandboxId:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723742950305057413,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723742950264725617,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7,PodSandboxId:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723742950210072284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d078a501-8f35-4405-9e91-8b05137c6421 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.372769527Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93075416-d501-4a04-bed2-8d6f18f9163d name=/runtime.v1.RuntimeService/Version
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.372859051Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93075416-d501-4a04-bed2-8d6f18f9163d name=/runtime.v1.RuntimeService/Version
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.373730064Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5579b40d-9afc-4a17-9e89-2a913beabf03 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.374642955Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743386374619082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5579b40d-9afc-4a17-9e89-2a913beabf03 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.375114215Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b6b1932-fe9f-4059-8a1d-5e7a706dd370 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.375163432Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b6b1932-fe9f-4059-8a1d-5e7a706dd370 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.375542833Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743172239012531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742979212891907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742978669055170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5b3e71b5c2a125da17643e4a273019b9d35fe1d6d57d95662dbfd5f406ed50,PodSandboxId:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723742977701324355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723742965431323988,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172374296
1580945625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b,PodSandboxId:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172374295380
3914568,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723742950245715779,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b,PodSandboxId:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723742950305057413,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723742950264725617,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7,PodSandboxId:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723742950210072284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b6b1932-fe9f-4059-8a1d-5e7a706dd370 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.415419938Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4a63edc6-4590-48f9-9ed3-6778992a277a name=/runtime.v1.RuntimeService/Version
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.415585009Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4a63edc6-4590-48f9-9ed3-6778992a277a name=/runtime.v1.RuntimeService/Version
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.416953807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b5cdfff-1b5c-404c-a89a-1d75d61f3c0e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.417604390Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743386417578362,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b5cdfff-1b5c-404c-a89a-1d75d61f3c0e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.418151938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9235117e-aed6-415e-b8c3-db14ca47b0fe name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.418223947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9235117e-aed6-415e-b8c3-db14ca47b0fe name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:36:26 ha-683878 crio[682]: time="2024-08-15 17:36:26.418537608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743172239012531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742979212891907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742978669055170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5b3e71b5c2a125da17643e4a273019b9d35fe1d6d57d95662dbfd5f406ed50,PodSandboxId:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723742977701324355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723742965431323988,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172374296
1580945625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b,PodSandboxId:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172374295380
3914568,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723742950245715779,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b,PodSandboxId:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723742950305057413,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723742950264725617,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7,PodSandboxId:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723742950210072284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9235117e-aed6-415e-b8c3-db14ca47b0fe name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c22e0c68e353d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   a48e946a0189a       busybox-7dff88458-lgsr4
	e2d856610b1da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   96be386135521       coredns-6f6b679f8f-c5mlj
	f085f1327c68a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   d330a801db93b       coredns-6f6b679f8f-kfczp
	8d5b3e71b5c2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a27d06298c6a4       storage-provisioner
	78d6dea2ba166       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    7 minutes ago       Running             kindnet-cni               0                   64e069f270f02       kindnet-g8lqf
	ea81ebf55447c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      7 minutes ago       Running             kube-proxy                0                   209398e9569b4       kube-proxy-s9hw4
	b6c95bb7bfbe2       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   2b2acfbffd442       kube-vip-ha-683878
	4d96eb3cf9f84       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      7 minutes ago       Running             kube-controller-manager   0                   89f0c6b43382e       kube-controller-manager-ha-683878
	08adcf281be8a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      7 minutes ago       Running             etcd                      0                   b48feabdeccee       etcd-ha-683878
	d9b5d872cbe2c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      7 minutes ago       Running             kube-scheduler            0                   a0ca28e1760aa       kube-scheduler-ha-683878
	c6948597165c3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      7 minutes ago       Running             kube-apiserver            0                   6934cfc4e26f2       kube-apiserver-ha-683878
	
	
	==> coredns [e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e] <==
	[INFO] 10.244.2.2:55769 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000128336s
	[INFO] 10.244.2.2:42789 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000091004s
	[INFO] 10.244.1.2:33661 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00022092s
	[INFO] 10.244.0.4:37543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001586797s
	[INFO] 10.244.0.4:39767 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147698s
	[INFO] 10.244.0.4:56644 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00111781s
	[INFO] 10.244.0.4:57862 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081256s
	[INFO] 10.244.2.2:39974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001814889s
	[INFO] 10.244.2.2:60048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001073479s
	[INFO] 10.244.2.2:59792 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116437s
	[INFO] 10.244.2.2:60453 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162311s
	[INFO] 10.244.2.2:38063 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074865s
	[INFO] 10.244.1.2:49382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204795s
	[INFO] 10.244.0.4:49451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020076s
	[INFO] 10.244.0.4:36025 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090742s
	[INFO] 10.244.1.2:40041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120543s
	[INFO] 10.244.1.2:44246 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135148s
	[INFO] 10.244.1.2:49551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109408s
	[INFO] 10.244.0.4:54048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242835s
	[INFO] 10.244.0.4:58043 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114208s
	[INFO] 10.244.0.4:57821 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014893s
	[INFO] 10.244.0.4:60055 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059928s
	[INFO] 10.244.2.2:59967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188473s
	[INFO] 10.244.2.2:46929 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173466s
	[INFO] 10.244.2.2:40321 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103061s
	
	
	==> coredns [f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b] <==
	[INFO] 10.244.1.2:47364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151565s
	[INFO] 10.244.1.2:55344 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.016525491s
	[INFO] 10.244.1.2:57120 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172831s
	[INFO] 10.244.1.2:55849 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.014643038s
	[INFO] 10.244.1.2:47083 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161478s
	[INFO] 10.244.1.2:45144 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142497s
	[INFO] 10.244.1.2:41019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147233s
	[INFO] 10.244.0.4:50547 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154587s
	[INFO] 10.244.0.4:60786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018138s
	[INFO] 10.244.0.4:51598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011869s
	[INFO] 10.244.0.4:59583 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005686s
	[INFO] 10.244.2.2:47444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121752s
	[INFO] 10.244.2.2:46973 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092024s
	[INFO] 10.244.2.2:42492 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092653s
	[INFO] 10.244.1.2:38440 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00026281s
	[INFO] 10.244.1.2:50999 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076764s
	[INFO] 10.244.1.2:46163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107061s
	[INFO] 10.244.0.4:36567 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099261s
	[INFO] 10.244.0.4:51415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079336s
	[INFO] 10.244.2.2:33646 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132168s
	[INFO] 10.244.2.2:41707 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123477s
	[INFO] 10.244.2.2:46838 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090831s
	[INFO] 10.244.2.2:46347 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071615s
	[INFO] 10.244.1.2:58233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222961s
	[INFO] 10.244.2.2:37537 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108341s
	
	
	==> describe nodes <==
	Name:               ha-683878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_29_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:29:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:36:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:33:21 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:33:21 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:33:21 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:33:21 +0000   Thu, 15 Aug 2024 17:29:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-683878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fae4a08d40d64f788bfe5305cfe9e22b
	  System UUID:                fae4a08d-40d6-4f78-8bfe-5305cfe9e22b
	  Boot ID:                    a20b912d-dbbf-42f1-bb62-642f6b4f28ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lgsr4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-6f6b679f8f-c5mlj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 coredns-6f6b679f8f-kfczp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m5s
	  kube-system                 etcd-ha-683878                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m10s
	  kube-system                 kindnet-g8lqf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m6s
	  kube-system                 kube-apiserver-ha-683878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-controller-manager-ha-683878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-proxy-s9hw4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 kube-scheduler-ha-683878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-vip-ha-683878                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m4s   kube-proxy       
	  Normal  Starting                 7m10s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m10s  kubelet          Node ha-683878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m10s  kubelet          Node ha-683878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m10s  kubelet          Node ha-683878 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m6s   node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal  NodeReady                6m49s  kubelet          Node ha-683878 status is now: NodeReady
	  Normal  RegisteredNode           5m15s  node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	
	
	Name:               ha-683878-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:31:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:33:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 17:33:05 +0000   Thu, 15 Aug 2024 17:34:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 17:33:05 +0000   Thu, 15 Aug 2024 17:34:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 17:33:05 +0000   Thu, 15 Aug 2024 17:34:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 17:33:05 +0000   Thu, 15 Aug 2024 17:34:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-683878-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f7afa772a5e433884c57e372a6611cf
	  System UUID:                8f7afa77-2a5e-4338-84c5-7e372a6611cf
	  Boot ID:                    7d53cde9-9e38-44a8-99a7-cf7f6e592677
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j8h8r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-683878-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-z5z9h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m24s
	  kube-system                 kube-apiserver-ha-683878-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-683878-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-89p4v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 kube-scheduler-ha-683878-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-vip-ha-683878-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m24s (x8 over 5m24s)  kubelet          Node ha-683878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m24s (x8 over 5m24s)  kubelet          Node ha-683878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m24s (x7 over 5m24s)  kubelet          Node ha-683878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           5m15s                  node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-683878-m02 status is now: NodeNotReady
	
	
	Name:               ha-683878-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_32_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:32:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:36:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:33:19 +0000   Thu, 15 Aug 2024 17:32:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:33:19 +0000   Thu, 15 Aug 2024 17:32:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:33:19 +0000   Thu, 15 Aug 2024 17:32:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:33:19 +0000   Thu, 15 Aug 2024 17:32:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-683878-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2955de94b234fe7b9772686648cfdec
	  System UUID:                e2955de9-4b23-4fe7-b977-2686648cfdec
	  Boot ID:                    59d90ed6-06e9-4243-bf30-f7876e81cc8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-sk47b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-683878-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m6s
	  kube-system                 kindnet-6bccr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m8s
	  kube-system                 kube-apiserver-ha-683878-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-controller-manager-ha-683878-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-8bp98                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-ha-683878-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-683878-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node ha-683878-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node ha-683878-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node ha-683878-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal  RegisteredNode           4m5s                 node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal  RegisteredNode           4m                   node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	
	
	Name:               ha-683878-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_33_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:33:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:36:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:33:57 +0000   Thu, 15 Aug 2024 17:33:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:33:57 +0000   Thu, 15 Aug 2024 17:33:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:33:57 +0000   Thu, 15 Aug 2024 17:33:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:33:57 +0000   Thu, 15 Aug 2024 17:33:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-683878-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a40a481bcbcc4fd6871392be97e352cc
	  System UUID:                a40a481b-cbcc-4fd6-8713-92be97e352cc
	  Boot ID:                    79dd6bf7-1c68-4e72-a539-a47e9aa8429f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hmfn7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-8clcw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 2m56s            kube-proxy       
	  Normal  RegisteredNode           3m               node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet          Node ha-683878-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet          Node ha-683878-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m)  kubelet          Node ha-683878-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m56s            node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal  RegisteredNode           2m55s            node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal  NodeReady                2m40s            kubelet          Node ha-683878-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug15 17:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050086] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039163] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.758056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.450594] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.804535] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.632720] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.064329] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054606] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[Aug15 17:29] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.110126] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.269301] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.960612] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.119022] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.056299] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075028] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.095571] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.103797] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010085] kauditd_printk_skb: 34 callbacks suppressed
	[ +22.762994] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf] <==
	{"level":"warn","ts":"2024-08-15T17:36:26.691079Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.693530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.706273Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.714283Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.718165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.726900Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.734755Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.744222Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.747870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.750939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.752612Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.760813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.767105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.773040Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.777549Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.780894Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.787490Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.792986Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.799280Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.802942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.805774Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.809355Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.815263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.821072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:36:26.852855Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:36:26 up 7 min,  0 users,  load average: 0.22, 0.27, 0.15
	Linux ha-683878 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480] <==
	I0815 17:35:56.712883       1 main.go:299] handling current node
	I0815 17:36:06.710893       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:36:06.710964       1 main.go:299] handling current node
	I0815 17:36:06.710993       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:36:06.710999       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:36:06.711149       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:36:06.711174       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:36:06.711224       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:36:06.711245       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:36:16.709608       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:36:16.709662       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:36:16.709818       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:36:16.709843       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:36:16.709896       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:36:16.709918       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:36:16.709968       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:36:16.709989       1 main.go:299] handling current node
	I0815 17:36:26.704697       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:36:26.704784       1 main.go:299] handling current node
	I0815 17:36:26.704826       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:36:26.704833       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:36:26.704985       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:36:26.704991       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:36:26.705039       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:36:26.705043       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7] <==
	I0815 17:29:15.151418       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0815 17:29:15.161727       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17]
	I0815 17:29:15.162918       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 17:29:15.167306       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 17:29:15.382755       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 17:29:16.548678       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 17:29:16.570988       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0815 17:29:16.587491       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 17:29:20.734008       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0815 17:29:20.994540       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0815 17:32:53.838350       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52120: use of closed network connection
	E0815 17:32:54.014807       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52146: use of closed network connection
	E0815 17:32:54.200944       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52170: use of closed network connection
	E0815 17:32:54.419875       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52196: use of closed network connection
	E0815 17:32:54.597858       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52218: use of closed network connection
	E0815 17:32:54.768077       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52236: use of closed network connection
	E0815 17:32:54.955849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52248: use of closed network connection
	E0815 17:32:55.157786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52268: use of closed network connection
	E0815 17:32:55.343844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52274: use of closed network connection
	E0815 17:32:55.635258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52304: use of closed network connection
	E0815 17:32:55.805339       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36266: use of closed network connection
	E0815 17:32:55.987801       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36294: use of closed network connection
	E0815 17:32:56.172253       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36320: use of closed network connection
	E0815 17:32:56.343147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36346: use of closed network connection
	E0815 17:32:56.514633       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36362: use of closed network connection
	
	
	==> kube-controller-manager [4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b] <==
	I0815 17:33:26.556699       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-683878-m04" podCIDRs=["10.244.3.0/24"]
	I0815 17:33:26.556761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:26.556791       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:26.574347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:26.874723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:27.061766       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:27.337917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:30.264213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:30.265572       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683878-m04"
	I0815 17:33:30.285394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:31.205384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:31.228108       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:36.778763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:46.285024       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683878-m04"
	I0815 17:33:46.285110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:46.301417       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:46.987205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:57.171607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:34:37.011165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	I0815 17:34:37.011927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683878-m04"
	I0815 17:34:37.034567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	I0815 17:34:37.044252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.81322ms"
	I0815 17:34:37.044658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.158µs"
	I0815 17:34:40.354378       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	I0815 17:34:42.280070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	
	
	==> kube-proxy [ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:29:21.913244       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 17:29:21.929928       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	E0815 17:29:21.930207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:29:21.968539       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:29:21.968623       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:29:21.968663       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:29:21.971250       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:29:21.971675       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:29:21.971868       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:29:21.973356       1 config.go:197] "Starting service config controller"
	I0815 17:29:21.973423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:29:21.973573       1 config.go:326] "Starting node config controller"
	I0815 17:29:21.973598       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:29:21.973540       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:29:21.973728       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:29:22.074190       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 17:29:22.074267       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:29:22.074282       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f] <==
	I0815 17:32:18.642595       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6bccr" node="ha-683878-m03"
	E0815 17:32:18.666218       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8bp98\": pod kube-proxy-8bp98 is already assigned to node \"ha-683878-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8bp98" node="ha-683878-m03"
	E0815 17:32:18.668346       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 009b24bb-3d29-4ba6-b18f-0694f7479636(kube-system/kube-proxy-8bp98) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8bp98"
	E0815 17:32:18.668396       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8bp98\": pod kube-proxy-8bp98 is already assigned to node \"ha-683878-m03\"" pod="kube-system/kube-proxy-8bp98"
	I0815 17:32:18.668418       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8bp98" node="ha-683878-m03"
	E0815 17:32:48.135118       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j8h8r\": pod busybox-7dff88458-j8h8r is already assigned to node \"ha-683878-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-j8h8r" node="ha-683878-m02"
	E0815 17:32:48.135707       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6b5e6835-6da3-4460-97b8-8155d7edb3c4(default/busybox-7dff88458-j8h8r) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-j8h8r"
	E0815 17:32:48.136091       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j8h8r\": pod busybox-7dff88458-j8h8r is already assigned to node \"ha-683878-m02\"" pod="default/busybox-7dff88458-j8h8r"
	I0815 17:32:48.136393       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-j8h8r" node="ha-683878-m02"
	E0815 17:32:48.191220       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lgsr4\": pod busybox-7dff88458-lgsr4 is already assigned to node \"ha-683878\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lgsr4" node="ha-683878"
	E0815 17:32:48.191414       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-sk47b\": pod busybox-7dff88458-sk47b is already assigned to node \"ha-683878-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-sk47b" node="ha-683878-m03"
	E0815 17:32:48.191554       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0cc66ed5-a981-4fe1-8128-f12c914a8c45(default/busybox-7dff88458-sk47b) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-sk47b"
	E0815 17:32:48.191574       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-sk47b\": pod busybox-7dff88458-sk47b is already assigned to node \"ha-683878-m03\"" pod="default/busybox-7dff88458-sk47b"
	I0815 17:32:48.191598       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-sk47b" node="ha-683878-m03"
	E0815 17:32:48.191389       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 17ac3df7-c2a0-40b5-b107-ab6a7a0417af(default/busybox-7dff88458-lgsr4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lgsr4"
	E0815 17:32:48.191771       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lgsr4\": pod busybox-7dff88458-lgsr4 is already assigned to node \"ha-683878\"" pod="default/busybox-7dff88458-lgsr4"
	I0815 17:32:48.191899       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lgsr4" node="ha-683878"
	E0815 17:33:26.612943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dzspw\": pod kube-proxy-dzspw is already assigned to node \"ha-683878-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dzspw" node="ha-683878-m04"
	E0815 17:33:26.613188       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod eb8dfa16-0d1d-4ff8-8692-4268881e44c8(kube-system/kube-proxy-dzspw) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dzspw"
	E0815 17:33:26.613271       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dzspw\": pod kube-proxy-dzspw is already assigned to node \"ha-683878-m04\"" pod="kube-system/kube-proxy-dzspw"
	I0815 17:33:26.613349       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dzspw" node="ha-683878-m04"
	E0815 17:33:26.634591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hmfn7\": pod kindnet-hmfn7 is already assigned to node \"ha-683878-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hmfn7" node="ha-683878-m04"
	E0815 17:33:26.637167       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e58e4f5f-3ee5-4fa8-87c8-6caf24492efa(kube-system/kindnet-hmfn7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hmfn7"
	E0815 17:33:26.637925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hmfn7\": pod kindnet-hmfn7 is already assigned to node \"ha-683878-m04\"" pod="kube-system/kindnet-hmfn7"
	I0815 17:33:26.638049       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hmfn7" node="ha-683878-m04"
	
	
	==> kubelet <==
	Aug 15 17:35:16 ha-683878 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:35:16 ha-683878 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:35:16 ha-683878 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:35:16 ha-683878 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:35:16 ha-683878 kubelet[1316]: E0815 17:35:16.630131    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743316629828832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:16 ha-683878 kubelet[1316]: E0815 17:35:16.630172    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743316629828832,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:26 ha-683878 kubelet[1316]: E0815 17:35:26.631511    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743326631228005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:26 ha-683878 kubelet[1316]: E0815 17:35:26.631550    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743326631228005,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:36 ha-683878 kubelet[1316]: E0815 17:35:36.633755    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743336633060290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:36 ha-683878 kubelet[1316]: E0815 17:35:36.634019    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743336633060290,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:46 ha-683878 kubelet[1316]: E0815 17:35:46.635657    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743346635308064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:46 ha-683878 kubelet[1316]: E0815 17:35:46.635696    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743346635308064,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:56 ha-683878 kubelet[1316]: E0815 17:35:56.637674    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743356637189559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:35:56 ha-683878 kubelet[1316]: E0815 17:35:56.637943    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743356637189559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:06 ha-683878 kubelet[1316]: E0815 17:36:06.639680    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743366639256755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:06 ha-683878 kubelet[1316]: E0815 17:36:06.639709    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743366639256755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:16 ha-683878 kubelet[1316]: E0815 17:36:16.492428    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 17:36:16 ha-683878 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:36:16 ha-683878 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:36:16 ha-683878 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:36:16 ha-683878 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:36:16 ha-683878 kubelet[1316]: E0815 17:36:16.641194    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743376640918209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:16 ha-683878 kubelet[1316]: E0815 17:36:16.641235    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743376640918209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:26 ha-683878 kubelet[1316]: E0815 17:36:26.643141    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743386642854804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:26 ha-683878 kubelet[1316]: E0815 17:36:26.643205    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743386642854804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
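Note: in the node descriptions captured above, ha-683878-m02 carries the node.kubernetes.io/unreachable NoExecute/NoSchedule taints and all of its conditions report Unknown with "Kubelet stopped posting node status", which is consistent with the secondary control-plane node having been stopped while the other three nodes stay Ready. A minimal manual check of the same state (not part of the test harness output; it assumes the ha-683878 profile and kubeconfig context shown in these logs) could be:

	kubectl --context ha-683878 get nodes -o wide
	kubectl --context ha-683878 describe node ha-683878-m02
	out/minikube-linux-amd64 -p ha-683878 status --alsologtostderr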
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683878 -n ha-683878
helpers_test.go:261: (dbg) Run:  kubectl --context ha-683878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (58.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 3 (3.188844504s)

                                                
                                                
-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:36:31.378910   37385 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:36:31.379347   37385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:31.379364   37385 out.go:358] Setting ErrFile to fd 2...
	I0815 17:36:31.379371   37385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:31.379802   37385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:36:31.380074   37385 out.go:352] Setting JSON to false
	I0815 17:36:31.380102   37385 mustload.go:65] Loading cluster: ha-683878
	I0815 17:36:31.380194   37385 notify.go:220] Checking for updates...
	I0815 17:36:31.380746   37385 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:36:31.380762   37385 status.go:255] checking status of ha-683878 ...
	I0815 17:36:31.381135   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:31.381175   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:31.396234   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41155
	I0815 17:36:31.396626   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:31.397155   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:31.397170   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:31.397556   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:31.397750   37385 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:36:31.399419   37385 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:36:31.399437   37385 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:31.399699   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:31.399734   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:31.414281   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0815 17:36:31.414640   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:31.415036   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:31.415058   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:31.415355   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:31.415534   37385 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:36:31.418237   37385 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:31.418617   37385 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:31.418644   37385 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:31.418790   37385 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:31.419163   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:31.419207   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:31.433077   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I0815 17:36:31.433445   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:31.433907   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:31.433925   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:31.434203   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:31.434369   37385 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:36:31.434541   37385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:31.434571   37385 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:36:31.437110   37385 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:31.437483   37385 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:31.437510   37385 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:31.437633   37385 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:36:31.437787   37385 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:36:31.437935   37385 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:36:31.438053   37385 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:36:31.518184   37385 ssh_runner.go:195] Run: systemctl --version
	I0815 17:36:31.525211   37385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:31.540686   37385 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:31.540720   37385 api_server.go:166] Checking apiserver status ...
	I0815 17:36:31.540781   37385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:31.554914   37385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:36:31.564407   37385 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:31.564446   37385 ssh_runner.go:195] Run: ls
	I0815 17:36:31.569050   37385 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:31.574883   37385 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:31.574904   37385 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:36:31.574922   37385 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:31.574935   37385 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:36:31.575276   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:31.575320   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:31.590050   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I0815 17:36:31.590475   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:31.590938   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:31.590961   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:31.591349   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:31.591542   37385 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:36:31.593038   37385 status.go:330] ha-683878-m02 host status = "Running" (err=<nil>)
	I0815 17:36:31.593053   37385 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:31.593413   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:31.593445   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:31.609000   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36361
	I0815 17:36:31.609363   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:31.609833   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:31.609852   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:31.610137   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:31.610329   37385 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:36:31.612913   37385 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:31.613307   37385 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:31.613327   37385 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:31.613489   37385 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:31.613879   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:31.613920   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:31.628579   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46673
	I0815 17:36:31.628935   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:31.629459   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:31.629479   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:31.629786   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:31.630012   37385 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:36:31.630250   37385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:31.630273   37385 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:36:31.633463   37385 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:31.633950   37385 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:31.633972   37385 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:31.634157   37385 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:36:31.634330   37385 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:36:31.634463   37385 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:36:31.634571   37385 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	W0815 17:36:34.192775   37385 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:36:34.192892   37385 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0815 17:36:34.192915   37385 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:34.192925   37385 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 17:36:34.192947   37385 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:34.192958   37385 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:36:34.193385   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:34.193449   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:34.207900   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0815 17:36:34.208337   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:34.208876   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:34.208898   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:34.209222   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:34.209433   37385 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:36:34.210915   37385 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:36:34.210932   37385 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:34.211557   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:34.211659   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:34.226444   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0815 17:36:34.226835   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:34.227304   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:34.227329   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:34.227683   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:34.227863   37385 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:36:34.230419   37385 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:34.230792   37385 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:34.230840   37385 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:34.230949   37385 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:34.231277   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:34.231312   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:34.245848   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41789
	I0815 17:36:34.246232   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:34.246698   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:34.246722   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:34.247007   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:34.247213   37385 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:36:34.247452   37385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:34.247472   37385 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:36:34.250201   37385 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:34.250594   37385 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:34.250623   37385 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:34.250719   37385 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:36:34.250880   37385 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:36:34.251015   37385 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:36:34.251143   37385 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:36:34.327981   37385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:34.342948   37385 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:34.342978   37385 api_server.go:166] Checking apiserver status ...
	I0815 17:36:34.343014   37385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:34.356779   37385 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:36:34.366040   37385 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:34.366087   37385 ssh_runner.go:195] Run: ls
	I0815 17:36:34.370500   37385 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:34.376982   37385 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:34.377004   37385 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:36:34.377012   37385 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:34.377025   37385 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:36:34.377368   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:34.377402   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:34.391935   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I0815 17:36:34.392348   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:34.392854   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:34.392874   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:34.393141   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:34.393329   37385 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:36:34.394684   37385 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:36:34.394702   37385 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:34.394976   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:34.395008   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:34.409064   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0815 17:36:34.409424   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:34.409874   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:34.409897   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:34.410170   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:34.410349   37385 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:36:34.413015   37385 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:34.413380   37385 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:34.413400   37385 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:34.413534   37385 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:34.413837   37385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:34.413870   37385 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:34.428626   37385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43475
	I0815 17:36:34.428999   37385 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:34.429467   37385 main.go:141] libmachine: Using API Version  1
	I0815 17:36:34.429486   37385 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:34.429745   37385 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:34.429895   37385 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:36:34.430032   37385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:34.430061   37385 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:36:34.432670   37385 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:34.433052   37385 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:34.433082   37385 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:34.433216   37385 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:36:34.433354   37385 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:36:34.433482   37385 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:36:34.433611   37385 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:36:34.512479   37385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:34.527319   37385 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
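Note: the exit status 3 in the status run above traces back to the SSH dial against ha-683878-m02 failing with "dial tcp 192.168.39.232:22: connect: no route to host", which is expected while the previously stopped secondary node is still coming back up; status then reports that node as Host:Error / Kubelet:Nonexistent. As a minimal, hypothetical Go sketch (not minikube's own implementation), the kind of TCP reachability probe that produces this error class looks roughly like:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSSH reports whether a node's SSH port accepts TCP connections.
	// A powered-off or still-booting KVM guest typically fails here with
	// "connect: no route to host", which the status check surfaces as Host:Error.
	func probeSSH(addr string) error {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		// Address taken from the log above; adjust for the node being probed.
		if err := probeSSH("192.168.39.232:22"); err != nil {
			fmt.Println("unreachable:", err)
			return
		}
		fmt.Println("ssh port reachable")
	}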
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 3 (5.294802006s)

                                                
                                                
-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:36:35.416597   37485 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:36:35.416846   37485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:35.416855   37485 out.go:358] Setting ErrFile to fd 2...
	I0815 17:36:35.416858   37485 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:35.417025   37485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:36:35.417191   37485 out.go:352] Setting JSON to false
	I0815 17:36:35.417215   37485 mustload.go:65] Loading cluster: ha-683878
	I0815 17:36:35.417262   37485 notify.go:220] Checking for updates...
	I0815 17:36:35.417727   37485 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:36:35.417747   37485 status.go:255] checking status of ha-683878 ...
	I0815 17:36:35.418177   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:35.418239   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:35.438048   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36505
	I0815 17:36:35.438464   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:35.439023   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:35.439055   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:35.439422   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:35.439566   37485 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:36:35.441256   37485 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:36:35.441271   37485 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:35.441539   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:35.441574   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:35.456719   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I0815 17:36:35.457128   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:35.457991   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:35.458018   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:35.458315   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:35.458490   37485 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:36:35.461210   37485 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:35.461639   37485 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:35.461663   37485 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:35.461809   37485 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:35.462161   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:35.462203   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:35.476585   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0815 17:36:35.476984   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:35.477446   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:35.477465   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:35.477759   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:35.477969   37485 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:36:35.478175   37485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:35.478208   37485 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:36:35.480989   37485 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:35.481454   37485 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:35.481484   37485 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:35.481652   37485 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:36:35.481832   37485 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:36:35.481966   37485 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:36:35.482116   37485 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:36:35.560433   37485 ssh_runner.go:195] Run: systemctl --version
	I0815 17:36:35.566263   37485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:35.582070   37485 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:35.582102   37485 api_server.go:166] Checking apiserver status ...
	I0815 17:36:35.582137   37485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:35.599016   37485 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:36:35.609580   37485 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:35.609630   37485 ssh_runner.go:195] Run: ls
	I0815 17:36:35.615044   37485 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:35.622559   37485 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:35.622583   37485 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:36:35.622591   37485 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:35.622633   37485 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:36:35.622933   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:35.622968   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:35.637660   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37997
	I0815 17:36:35.637991   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:35.638451   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:35.638470   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:35.638800   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:35.638986   37485 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:36:35.640483   37485 status.go:330] ha-683878-m02 host status = "Running" (err=<nil>)
	I0815 17:36:35.640516   37485 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:35.640790   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:35.640821   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:35.654713   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I0815 17:36:35.655149   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:35.655624   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:35.655649   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:35.655927   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:35.656093   37485 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:36:35.658723   37485 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:35.659165   37485 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:35.659205   37485 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:35.659366   37485 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:35.659666   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:35.659706   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:35.674990   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40159
	I0815 17:36:35.675352   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:35.675873   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:35.675895   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:35.676171   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:35.676364   37485 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:36:35.676608   37485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:35.676631   37485 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:36:35.679298   37485 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:35.679677   37485 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:35.679693   37485 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:35.679844   37485 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:36:35.679969   37485 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:36:35.680131   37485 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:36:35.680236   37485 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	W0815 17:36:37.264849   37485 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:37.264922   37485 retry.go:31] will retry after 199.862707ms: dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:36:40.336829   37485 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:36:40.336930   37485 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0815 17:36:40.336948   37485 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:40.336955   37485 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 17:36:40.336973   37485 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:40.336981   37485 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:36:40.337293   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:40.337331   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:40.351842   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46789
	I0815 17:36:40.352243   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:40.352728   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:40.352749   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:40.353058   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:40.353243   37485 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:36:40.354799   37485 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:36:40.354814   37485 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:40.355134   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:40.355179   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:40.369456   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0815 17:36:40.369832   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:40.370259   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:40.370280   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:40.370629   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:40.370807   37485 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:36:40.373588   37485 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:40.374035   37485 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:40.374062   37485 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:40.374207   37485 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:40.374565   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:40.374623   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:40.388814   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I0815 17:36:40.389154   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:40.389567   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:40.389585   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:40.389851   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:40.390020   37485 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:36:40.390190   37485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:40.390210   37485 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:36:40.392863   37485 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:40.393288   37485 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:40.393325   37485 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:40.393435   37485 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:36:40.393605   37485 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:36:40.393743   37485 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:36:40.393850   37485 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:36:40.472590   37485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:40.489220   37485 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:40.489247   37485 api_server.go:166] Checking apiserver status ...
	I0815 17:36:40.489277   37485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:40.503772   37485 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:36:40.513274   37485 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:40.513327   37485 ssh_runner.go:195] Run: ls
	I0815 17:36:40.517592   37485 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:40.522128   37485 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:40.522147   37485 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:36:40.522155   37485 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:40.522169   37485 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:36:40.522443   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:40.522472   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:40.536804   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35589
	I0815 17:36:40.537205   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:40.537713   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:40.537735   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:40.538052   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:40.538278   37485 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:36:40.539646   37485 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:36:40.539658   37485 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:40.539926   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:40.539966   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:40.554192   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45173
	I0815 17:36:40.554554   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:40.555001   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:40.555020   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:40.555271   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:40.555441   37485 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:36:40.557989   37485 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:40.558330   37485 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:40.558352   37485 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:40.558523   37485 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:40.558850   37485 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:40.558885   37485 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:40.572911   37485 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46749
	I0815 17:36:40.573456   37485 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:40.573905   37485 main.go:141] libmachine: Using API Version  1
	I0815 17:36:40.573924   37485 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:40.574217   37485 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:40.574436   37485 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:36:40.574603   37485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:40.574621   37485 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:36:40.577243   37485 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:40.577626   37485 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:40.577647   37485 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:40.577794   37485 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:36:40.577943   37485 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:36:40.578081   37485 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:36:40.578220   37485 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:36:40.655828   37485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:40.670320   37485 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 3 (5.013876172s)

                                                
                                                
-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:36:41.860559   37602 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:36:41.860834   37602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:41.860844   37602 out.go:358] Setting ErrFile to fd 2...
	I0815 17:36:41.860848   37602 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:41.861031   37602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:36:41.861202   37602 out.go:352] Setting JSON to false
	I0815 17:36:41.861226   37602 mustload.go:65] Loading cluster: ha-683878
	I0815 17:36:41.861313   37602 notify.go:220] Checking for updates...
	I0815 17:36:41.861652   37602 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:36:41.861668   37602 status.go:255] checking status of ha-683878 ...
	I0815 17:36:41.862032   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:41.862103   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:41.879925   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35133
	I0815 17:36:41.880316   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:41.880862   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:41.880886   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:41.881317   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:41.881535   37602 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:36:41.882994   37602 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:36:41.883011   37602 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:41.883388   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:41.883437   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:41.898087   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0815 17:36:41.898472   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:41.898899   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:41.898919   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:41.899230   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:41.899396   37602 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:36:41.901774   37602 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:41.902207   37602 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:41.902231   37602 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:41.902442   37602 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:41.902756   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:41.902799   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:41.916936   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37693
	I0815 17:36:41.917360   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:41.917841   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:41.917860   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:41.918162   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:41.918340   37602 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:36:41.918513   37602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:41.918539   37602 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:36:41.921053   37602 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:41.921479   37602 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:41.921512   37602 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:41.921649   37602 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:36:41.921804   37602 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:36:41.921940   37602 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:36:41.922057   37602 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:36:42.000622   37602 ssh_runner.go:195] Run: systemctl --version
	I0815 17:36:42.006871   37602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:42.022123   37602 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:42.022155   37602 api_server.go:166] Checking apiserver status ...
	I0815 17:36:42.022211   37602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:42.036817   37602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:36:42.046877   37602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:42.046949   37602 ssh_runner.go:195] Run: ls
	I0815 17:36:42.052505   37602 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:42.056696   37602 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:42.056716   37602 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:36:42.056724   37602 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:42.056738   37602 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:36:42.057013   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:42.057047   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:42.071346   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0815 17:36:42.071747   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:42.072156   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:42.072183   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:42.072532   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:42.072721   37602 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:36:42.074419   37602 status.go:330] ha-683878-m02 host status = "Running" (err=<nil>)
	I0815 17:36:42.074433   37602 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:42.074750   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:42.074806   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:42.089061   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0815 17:36:42.089467   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:42.089919   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:42.089937   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:42.090249   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:42.090450   37602 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:36:42.093444   37602 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:42.093861   37602 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:42.093889   37602 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:42.094043   37602 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:42.094391   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:42.094440   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:42.108694   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39583
	I0815 17:36:42.109058   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:42.109469   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:42.109485   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:42.109765   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:42.109954   37602 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:36:42.110097   37602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:42.110117   37602 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:36:42.112837   37602 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:42.113236   37602 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:42.113254   37602 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:42.113431   37602 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:36:42.113616   37602 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:36:42.113764   37602 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:36:42.113902   37602 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	W0815 17:36:43.408822   37602 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:43.408870   37602 retry.go:31] will retry after 321.460228ms: dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:36:46.480791   37602 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:36:46.480883   37602 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0815 17:36:46.480900   37602 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:46.480907   37602 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 17:36:46.480933   37602 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:46.480941   37602 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:36:46.481244   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:46.481278   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:46.495996   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0815 17:36:46.496391   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:46.496918   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:46.496933   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:46.497229   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:46.497381   37602 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:36:46.499046   37602 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:36:46.499063   37602 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:46.499473   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:46.499528   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:46.513425   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35165
	I0815 17:36:46.513761   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:46.514175   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:46.514194   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:46.514514   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:46.514691   37602 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:36:46.517081   37602 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:46.517490   37602 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:46.517519   37602 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:46.517622   37602 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:46.517926   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:46.517963   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:46.531604   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40297
	I0815 17:36:46.531941   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:46.532327   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:46.532347   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:46.532685   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:46.532834   37602 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:36:46.533005   37602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:46.533024   37602 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:36:46.535297   37602 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:46.535646   37602 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:46.535669   37602 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:46.535797   37602 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:36:46.535962   37602 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:36:46.536137   37602 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:36:46.536280   37602 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:36:46.616423   37602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:46.636446   37602 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:46.636471   37602 api_server.go:166] Checking apiserver status ...
	I0815 17:36:46.636526   37602 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:46.653371   37602 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:36:46.665260   37602 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:46.665299   37602 ssh_runner.go:195] Run: ls
	I0815 17:36:46.669877   37602 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:46.674297   37602 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:46.674314   37602 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:36:46.674322   37602 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:46.674343   37602 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:36:46.674623   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:46.674651   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:46.689447   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0815 17:36:46.689823   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:46.690231   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:46.690248   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:46.690566   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:46.690740   37602 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:36:46.692193   37602 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:36:46.692210   37602 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:46.692480   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:46.692527   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:46.706509   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42865
	I0815 17:36:46.706855   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:46.707256   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:46.707278   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:46.707582   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:46.707722   37602 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:36:46.711206   37602 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:46.711664   37602 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:46.711698   37602 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:46.711821   37602 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:46.712126   37602 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:46.712175   37602 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:46.726512   37602 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0815 17:36:46.726947   37602 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:46.727460   37602 main.go:141] libmachine: Using API Version  1
	I0815 17:36:46.727477   37602 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:46.727763   37602 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:46.727932   37602 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:36:46.728109   37602 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:46.728138   37602 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:36:46.730860   37602 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:46.731294   37602 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:46.731326   37602 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:46.731461   37602 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:36:46.731619   37602 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:36:46.731742   37602 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:36:46.731855   37602 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:36:46.815573   37602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:46.831556   37602 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
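Note on the ha-683878-m02 rows above: the status command never reaches the node. Its first probe, sh -c "df -h /var | awk 'NR==2{print $5}'" over SSH, cannot run because the dial to 192.168.39.232:22 returns "no route to host", so the node is reported as Host:Error with Kubelet and APIServer marked Nonexistent. Below is a minimal standalone sketch of that probe (not minikube's actual implementation), assuming the golang.org/x/crypto/ssh package and reusing the host, user, and key path shown in the log; against an unreachable node it fails at the same dial step.

// storageprobe.go: reproduce the /var usage probe that the status check runs over SSH.
// Sketch only; host, user, and key path are copied from the log lines above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // kept insecure only to keep the sketch short
	}
	// This dial is the step that fails in the log with "no route to host".
	client, err := ssh.Dial("tcp", "192.168.39.232:22", cfg)
	if err != nil {
		log.Fatalf("dial failed (matches the status error): %v", err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// Same probe the status check runs: percentage of /var in use.
	out, err := sess.Output(`sh -c "df -h /var | awk 'NR==2{print $5}'"`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/var usage: %s", out)
}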
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 3 (4.47309093s)

-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 17:36:48.658210   37702 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:36:48.658317   37702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:48.658325   37702 out.go:358] Setting ErrFile to fd 2...
	I0815 17:36:48.658329   37702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:48.658488   37702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:36:48.658632   37702 out.go:352] Setting JSON to false
	I0815 17:36:48.658656   37702 mustload.go:65] Loading cluster: ha-683878
	I0815 17:36:48.658681   37702 notify.go:220] Checking for updates...
	I0815 17:36:48.658984   37702 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:36:48.658996   37702 status.go:255] checking status of ha-683878 ...
	I0815 17:36:48.659332   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:48.659368   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:48.678734   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I0815 17:36:48.679138   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:48.679774   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:48.679801   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:48.680117   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:48.680334   37702 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:36:48.681876   37702 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:36:48.681891   37702 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:48.682181   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:48.682209   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:48.698067   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
	I0815 17:36:48.698466   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:48.698951   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:48.698974   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:48.699282   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:48.699459   37702 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:36:48.701811   37702 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:48.702221   37702 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:48.702251   37702 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:48.702358   37702 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:48.702632   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:48.702679   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:48.719512   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35957
	I0815 17:36:48.719924   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:48.720395   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:48.720415   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:48.720761   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:48.720948   37702 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:36:48.721155   37702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:48.721189   37702 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:36:48.723789   37702 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:48.724181   37702 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:48.724212   37702 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:48.724344   37702 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:36:48.724503   37702 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:36:48.724627   37702 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:36:48.724749   37702 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:36:48.804003   37702 ssh_runner.go:195] Run: systemctl --version
	I0815 17:36:48.814932   37702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:48.829525   37702 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:48.829561   37702 api_server.go:166] Checking apiserver status ...
	I0815 17:36:48.829603   37702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:48.843821   37702 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:36:48.854547   37702 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:48.854590   37702 ssh_runner.go:195] Run: ls
	I0815 17:36:48.858750   37702 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:48.863174   37702 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:48.863197   37702 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:36:48.863209   37702 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:48.863240   37702 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:36:48.863576   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:48.863609   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:48.879151   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37841
	I0815 17:36:48.879589   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:48.879998   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:48.880016   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:48.880321   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:48.880521   37702 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:36:48.881845   37702 status.go:330] ha-683878-m02 host status = "Running" (err=<nil>)
	I0815 17:36:48.881861   37702 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:48.882166   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:48.882200   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:48.896739   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I0815 17:36:48.897132   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:48.897570   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:48.897595   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:48.897831   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:48.897934   37702 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:36:48.900159   37702 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:48.900536   37702 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:48.900562   37702 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:48.900698   37702 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:48.901013   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:48.901053   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:48.915381   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35567
	I0815 17:36:48.915766   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:48.916168   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:48.916188   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:48.916482   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:48.916688   37702 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:36:48.916865   37702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:48.916886   37702 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:36:48.919233   37702 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:48.919610   37702 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:48.919645   37702 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:48.919751   37702 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:36:48.919903   37702 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:36:48.920056   37702 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:36:48.920190   37702 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	W0815 17:36:49.556768   37702 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:49.556826   37702 retry.go:31] will retry after 139.048321ms: dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:36:52.752734   37702 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:36:52.752834   37702 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0815 17:36:52.752852   37702 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:52.752862   37702 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 17:36:52.752913   37702 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:36:52.752925   37702 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:36:52.753367   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:52.753426   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:52.768515   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36349
	I0815 17:36:52.768984   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:52.769531   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:52.769555   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:52.769811   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:52.769971   37702 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:36:52.771369   37702 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:36:52.771385   37702 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:52.771676   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:52.771713   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:52.786357   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37951
	I0815 17:36:52.786720   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:52.787111   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:52.787132   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:52.787423   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:52.787604   37702 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:36:52.790785   37702 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:52.791293   37702 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:52.791314   37702 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:52.791473   37702 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:36:52.791810   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:52.791893   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:52.805722   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0815 17:36:52.806085   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:52.806528   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:52.806542   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:52.806864   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:52.807056   37702 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:36:52.807220   37702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:52.807237   37702 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:36:52.809695   37702 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:52.810076   37702 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:36:52.810094   37702 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:36:52.810274   37702 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:36:52.810412   37702 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:36:52.810569   37702 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:36:52.810695   37702 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:36:52.888631   37702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:52.904006   37702 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:52.904033   37702 api_server.go:166] Checking apiserver status ...
	I0815 17:36:52.904065   37702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:52.918740   37702 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:36:52.929518   37702 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:52.929571   37702 ssh_runner.go:195] Run: ls
	I0815 17:36:52.933928   37702 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:52.939876   37702 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:52.939901   37702 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:36:52.939911   37702 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:52.939933   37702 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:36:52.940387   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:52.940442   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:52.954866   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I0815 17:36:52.955299   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:52.955720   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:52.955737   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:52.956015   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:52.956167   37702 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:36:52.957664   37702 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:36:52.957681   37702 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:52.957987   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:52.958022   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:52.971975   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41029
	I0815 17:36:52.972328   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:52.972779   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:52.972803   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:52.973094   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:52.973261   37702 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:36:52.976010   37702 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:52.976500   37702 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:52.976534   37702 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:52.976700   37702 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:36:52.976994   37702 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:52.977031   37702 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:52.992055   37702 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
	I0815 17:36:52.992465   37702 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:52.992913   37702 main.go:141] libmachine: Using API Version  1
	I0815 17:36:52.992937   37702 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:52.993264   37702 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:52.993437   37702 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:36:52.993606   37702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:52.993620   37702 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:36:52.995996   37702 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:52.996343   37702 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:36:52.996370   37702 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:36:52.996528   37702 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:36:52.996688   37702 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:36:52.996842   37702 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:36:52.996951   37702 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:36:53.075405   37702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:53.089579   37702 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 3 (3.69106205s)

-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 17:36:57.479834   37819 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:36:57.479973   37819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:57.479985   37819 out.go:358] Setting ErrFile to fd 2...
	I0815 17:36:57.479991   37819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:57.480250   37819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:36:57.480466   37819 out.go:352] Setting JSON to false
	I0815 17:36:57.480516   37819 mustload.go:65] Loading cluster: ha-683878
	I0815 17:36:57.480608   37819 notify.go:220] Checking for updates...
	I0815 17:36:57.480999   37819 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:36:57.481020   37819 status.go:255] checking status of ha-683878 ...
	I0815 17:36:57.481473   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:57.481540   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:57.496147   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0815 17:36:57.496579   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:57.497055   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:36:57.497096   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:57.497538   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:57.497738   37819 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:36:57.499211   37819 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:36:57.499228   37819 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:57.499553   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:57.499593   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:57.513643   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44597
	I0815 17:36:57.513962   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:57.514360   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:36:57.514377   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:57.514612   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:57.514781   37819 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:36:57.517417   37819 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:57.517795   37819 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:57.517820   37819 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:57.517946   37819 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:36:57.518217   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:57.518249   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:57.531994   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0815 17:36:57.532376   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:57.532815   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:36:57.532833   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:57.533154   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:57.533309   37819 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:36:57.533478   37819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:57.533504   37819 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:36:57.536294   37819 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:57.536781   37819 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:36:57.536805   37819 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:36:57.536987   37819 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:36:57.537162   37819 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:36:57.537309   37819 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:36:57.537422   37819 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:36:57.617199   37819 ssh_runner.go:195] Run: systemctl --version
	I0815 17:36:57.624000   37819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:36:57.637843   37819 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:36:57.637881   37819 api_server.go:166] Checking apiserver status ...
	I0815 17:36:57.637917   37819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:36:57.652824   37819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:36:57.662039   37819 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:36:57.662071   37819 ssh_runner.go:195] Run: ls
	I0815 17:36:57.666880   37819 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:36:57.672902   37819 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:36:57.672922   37819 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:36:57.672934   37819 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:57.672954   37819 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:36:57.673246   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:57.673286   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:57.687861   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40469
	I0815 17:36:57.688256   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:57.688682   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:36:57.688705   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:57.688994   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:57.689156   37819 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:36:57.690570   37819 status.go:330] ha-683878-m02 host status = "Running" (err=<nil>)
	I0815 17:36:57.690585   37819 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:57.690870   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:57.690898   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:57.705626   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43209
	I0815 17:36:57.706019   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:57.706508   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:36:57.706537   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:57.706824   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:57.707012   37819 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:36:57.709612   37819 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:57.710057   37819 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:57.710085   37819 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:57.710242   37819 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:36:57.710654   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:36:57.710738   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:36:57.725384   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34017
	I0815 17:36:57.725714   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:36:57.726075   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:36:57.726098   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:36:57.726340   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:36:57.726517   37819 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:36:57.726703   37819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:36:57.726720   37819 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:36:57.729069   37819 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:57.729392   37819 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:36:57.729404   37819 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:36:57.729552   37819 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:36:57.729721   37819 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:36:57.729883   37819 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:36:57.730019   37819 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	W0815 17:37:00.784706   37819 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.232:22: connect: no route to host
	W0815 17:37:00.784810   37819 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	E0815 17:37:00.784833   37819 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:37:00.784846   37819 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0815 17:37:00.784868   37819 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.232:22: connect: no route to host
	I0815 17:37:00.784879   37819 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:37:00.785297   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:00.785339   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:00.799737   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43211
	I0815 17:37:00.800107   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:00.800598   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:37:00.800617   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:00.800986   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:00.801170   37819 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:37:00.802821   37819 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:37:00.802839   37819 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:37:00.803111   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:00.803141   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:00.817066   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
	I0815 17:37:00.817434   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:00.817866   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:37:00.817886   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:00.818194   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:00.818424   37819 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:37:00.821013   37819 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:00.821438   37819 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:00.821459   37819 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:00.821593   37819 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:37:00.821894   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:00.821935   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:00.835879   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34235
	I0815 17:37:00.836211   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:00.836648   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:37:00.836668   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:00.836933   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:00.837110   37819 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:37:00.837275   37819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:00.837293   37819 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:37:00.839931   37819 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:00.840338   37819 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:00.840369   37819 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:00.840522   37819 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:37:00.840694   37819 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:37:00.840824   37819 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:37:00.840941   37819 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:37:00.920152   37819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:00.936674   37819 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:37:00.936705   37819 api_server.go:166] Checking apiserver status ...
	I0815 17:37:00.936754   37819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:37:00.954434   37819 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:37:00.964546   37819 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:37:00.964609   37819 ssh_runner.go:195] Run: ls
	I0815 17:37:00.969100   37819 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:37:00.975270   37819 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:37:00.975293   37819 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:37:00.975300   37819 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:00.975314   37819 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:37:00.975593   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:00.975640   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:00.990453   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38587
	I0815 17:37:00.990815   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:00.991235   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:37:00.991266   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:00.991570   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:00.991796   37819 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:37:00.993346   37819 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:37:00.993362   37819 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:37:00.993630   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:00.993663   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:01.008870   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38113
	I0815 17:37:01.009225   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:01.009636   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:37:01.009662   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:01.009945   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:01.010125   37819 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:37:01.012672   37819 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:01.013056   37819 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:01.013080   37819 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:01.013223   37819 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:37:01.013505   37819 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:01.013540   37819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:01.027804   37819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I0815 17:37:01.028208   37819 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:01.028638   37819 main.go:141] libmachine: Using API Version  1
	I0815 17:37:01.028661   37819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:01.028976   37819 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:01.029160   37819 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:37:01.029361   37819 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:01.029379   37819 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:37:01.032094   37819 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:01.032582   37819 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:01.032611   37819 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:01.032745   37819 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:37:01.032879   37819 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:37:01.033021   37819 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:37:01.033128   37819 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:37:01.111527   37819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:01.127293   37819 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 7 (750.576941ms)

-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 17:37:07.629663   37947 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:37:07.629789   37947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:37:07.629799   37947 out.go:358] Setting ErrFile to fd 2...
	I0815 17:37:07.629805   37947 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:37:07.629977   37947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:37:07.630183   37947 out.go:352] Setting JSON to false
	I0815 17:37:07.630211   37947 mustload.go:65] Loading cluster: ha-683878
	I0815 17:37:07.630243   37947 notify.go:220] Checking for updates...
	I0815 17:37:07.630594   37947 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:37:07.630610   37947 status.go:255] checking status of ha-683878 ...
	I0815 17:37:07.630993   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:07.631057   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:07.648667   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I0815 17:37:07.649142   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:07.649720   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:07.649746   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:07.650107   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:07.650298   37947 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:37:07.651857   37947 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:37:07.651873   37947 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:37:07.652141   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:07.652200   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:07.669445   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35877
	I0815 17:37:07.669814   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:07.670343   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:07.670382   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:07.670692   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:07.670864   37947 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:37:07.673705   37947 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:07.674099   37947 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:37:07.674125   37947 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:07.674253   37947 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:37:07.674585   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:07.674626   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:07.688802   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I0815 17:37:07.689238   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:07.689691   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:07.689705   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:07.689993   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:07.690184   37947 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:37:07.690410   37947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:07.690447   37947 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:37:07.693611   37947 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:07.694047   37947 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:37:07.694094   37947 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:07.694183   37947 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:37:07.694336   37947 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:37:07.694492   37947 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:37:07.694617   37947 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:37:07.777186   37947 ssh_runner.go:195] Run: systemctl --version
	I0815 17:37:07.783953   37947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:07.800653   37947 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:37:07.800679   37947 api_server.go:166] Checking apiserver status ...
	I0815 17:37:07.800710   37947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:37:07.818212   37947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:37:07.829731   37947 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:37:07.829777   37947 ssh_runner.go:195] Run: ls
	I0815 17:37:07.834816   37947 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:37:07.838956   37947 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:37:07.838973   37947 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:37:07.838981   37947 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:07.838995   37947 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:37:07.839267   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:07.839295   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:07.854566   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I0815 17:37:07.854984   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:07.855452   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:07.855473   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:07.855756   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:07.855927   37947 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:37:07.996231   37947 status.go:330] ha-683878-m02 host status = "Stopped" (err=<nil>)
	I0815 17:37:07.996254   37947 status.go:343] host is not running, skipping remaining checks
	I0815 17:37:07.996260   37947 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:07.996276   37947 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:37:07.996593   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:07.996634   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:08.013459   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0815 17:37:08.013844   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:08.014308   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:08.014339   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:08.014634   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:08.014826   37947 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:37:08.016507   37947 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:37:08.016525   37947 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:37:08.016929   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:08.017000   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:08.031578   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38965
	I0815 17:37:08.031946   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:08.032386   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:08.032422   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:08.032749   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:08.032952   37947 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:37:08.035358   37947 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:08.035830   37947 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:08.035851   37947 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:08.036002   37947 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:37:08.036366   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:08.036411   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:08.050272   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45763
	I0815 17:37:08.050726   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:08.051188   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:08.051205   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:08.051480   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:08.051642   37947 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:37:08.051947   37947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:08.051977   37947 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:37:08.054698   37947 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:08.055274   37947 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:08.055294   37947 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:08.055437   37947 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:37:08.055595   37947 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:37:08.055763   37947 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:37:08.055909   37947 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:37:08.132097   37947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:08.148816   37947 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:37:08.148841   37947 api_server.go:166] Checking apiserver status ...
	I0815 17:37:08.148867   37947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:37:08.163142   37947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:37:08.172424   37947 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:37:08.172473   37947 ssh_runner.go:195] Run: ls
	I0815 17:37:08.176646   37947 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:37:08.182589   37947 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:37:08.182611   37947 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:37:08.182622   37947 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:08.182641   37947 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:37:08.183017   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:08.183073   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:08.198291   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0815 17:37:08.198690   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:08.199171   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:08.199194   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:08.199497   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:08.199676   37947 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:37:08.201138   37947 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:37:08.201159   37947 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:37:08.201542   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:08.201583   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:08.216508   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44261
	I0815 17:37:08.216874   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:08.217314   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:08.217333   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:08.217680   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:08.217827   37947 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:37:08.220371   37947 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:08.220734   37947 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:08.220769   37947 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:08.220887   37947 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:37:08.221197   37947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:08.221236   37947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:08.235335   37947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46373
	I0815 17:37:08.235682   37947 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:08.236093   37947 main.go:141] libmachine: Using API Version  1
	I0815 17:37:08.236112   37947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:08.236437   37947 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:08.236627   37947 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:37:08.236807   37947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:08.236823   37947 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:37:08.239286   37947 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:08.239712   37947 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:08.239734   37947 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:08.239892   37947 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:37:08.240059   37947 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:37:08.240224   37947 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:37:08.240370   37947 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:37:08.324145   37947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:08.338224   37947 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 7 (605.631726ms)

-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 17:37:17.343571   38061 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:37:17.343667   38061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:37:17.343674   38061 out.go:358] Setting ErrFile to fd 2...
	I0815 17:37:17.343678   38061 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:37:17.343868   38061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:37:17.344012   38061 out.go:352] Setting JSON to false
	I0815 17:37:17.344035   38061 mustload.go:65] Loading cluster: ha-683878
	I0815 17:37:17.344062   38061 notify.go:220] Checking for updates...
	I0815 17:37:17.344371   38061 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:37:17.344384   38061 status.go:255] checking status of ha-683878 ...
	I0815 17:37:17.344814   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.344859   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.363721   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42201
	I0815 17:37:17.364139   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.364717   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.364738   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.365098   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.365302   38061 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:37:17.366785   38061 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:37:17.366803   38061 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:37:17.367112   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.367147   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.382129   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41433
	I0815 17:37:17.382499   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.382892   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.382931   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.383363   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.383589   38061 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:37:17.386575   38061 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:17.386988   38061 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:37:17.387017   38061 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:17.387221   38061 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:37:17.387504   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.387543   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.401886   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0815 17:37:17.402286   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.402755   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.402775   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.403072   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.403386   38061 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:37:17.403562   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:17.403595   38061 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:37:17.406535   38061 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:17.406997   38061 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:37:17.407028   38061 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:17.407156   38061 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:37:17.407324   38061 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:37:17.407457   38061 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:37:17.407611   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:37:17.487782   38061 ssh_runner.go:195] Run: systemctl --version
	I0815 17:37:17.493871   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:17.509635   38061 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:37:17.509663   38061 api_server.go:166] Checking apiserver status ...
	I0815 17:37:17.509698   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:37:17.526565   38061 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:37:17.538134   38061 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:37:17.538186   38061 ssh_runner.go:195] Run: ls
	I0815 17:37:17.543006   38061 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:37:17.550079   38061 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:37:17.550100   38061 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:37:17.550109   38061 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:17.550122   38061 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:37:17.550388   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.550422   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.565344   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37479
	I0815 17:37:17.565712   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.566127   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.566145   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.566472   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.566648   38061 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:37:17.568119   38061 status.go:330] ha-683878-m02 host status = "Stopped" (err=<nil>)
	I0815 17:37:17.568130   38061 status.go:343] host is not running, skipping remaining checks
	I0815 17:37:17.568135   38061 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:17.568150   38061 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:37:17.568425   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.568459   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.582489   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41119
	I0815 17:37:17.582905   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.583355   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.583378   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.583715   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.583897   38061 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:37:17.585233   38061 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:37:17.585248   38061 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:37:17.585658   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.585696   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.599410   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33555
	I0815 17:37:17.599779   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.600277   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.600301   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.600602   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.600798   38061 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:37:17.603440   38061 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:17.603846   38061 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:17.603877   38061 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:17.604047   38061 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:37:17.604350   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.604379   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.618663   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35413
	I0815 17:37:17.619067   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.619473   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.619487   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.619858   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.620049   38061 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:37:17.620233   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:17.620253   38061 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:37:17.622814   38061 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:17.623303   38061 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:17.623337   38061 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:17.623472   38061 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:37:17.623660   38061 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:37:17.623793   38061 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:37:17.623915   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:37:17.703656   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:17.724638   38061 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:37:17.724662   38061 api_server.go:166] Checking apiserver status ...
	I0815 17:37:17.724701   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:37:17.737857   38061 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:37:17.747866   38061 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:37:17.747924   38061 ssh_runner.go:195] Run: ls
	I0815 17:37:17.752250   38061 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:37:17.756313   38061 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:37:17.756334   38061 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:37:17.756343   38061 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:17.756359   38061 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:37:17.756741   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.756775   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.771351   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40833
	I0815 17:37:17.771698   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.772104   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.772121   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.772419   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.772627   38061 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:37:17.774028   38061 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:37:17.774042   38061 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:37:17.774304   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.774333   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.788882   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0815 17:37:17.789210   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.789651   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.789673   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.789956   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.790137   38061 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:37:17.792782   38061 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:17.793199   38061 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:17.793240   38061 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:17.793365   38061 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:37:17.793641   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:17.793674   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:17.808667   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37247
	I0815 17:37:17.809092   38061 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:17.809491   38061 main.go:141] libmachine: Using API Version  1
	I0815 17:37:17.809514   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:17.809848   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:17.810049   38061 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:37:17.810256   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:17.810277   38061 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:37:17.812841   38061 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:17.813171   38061 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:17.813195   38061 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:17.813325   38061 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:37:17.813481   38061 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:37:17.813632   38061 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:37:17.813740   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:37:17.892000   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:17.907708   38061 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 7 (618.364225ms)

-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-683878-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 17:37:27.017653   38166 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:37:27.017773   38166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:37:27.017782   38166 out.go:358] Setting ErrFile to fd 2...
	I0815 17:37:27.017786   38166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:37:27.017993   38166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:37:27.018180   38166 out.go:352] Setting JSON to false
	I0815 17:37:27.018206   38166 mustload.go:65] Loading cluster: ha-683878
	I0815 17:37:27.018238   38166 notify.go:220] Checking for updates...
	I0815 17:37:27.018606   38166 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:37:27.018621   38166 status.go:255] checking status of ha-683878 ...
	I0815 17:37:27.019006   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.019070   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.034667   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38051
	I0815 17:37:27.035072   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.035593   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.035616   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.036098   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.036299   38166 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:37:27.037951   38166 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:37:27.037969   38166 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:37:27.038265   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.038304   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.054440   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42111
	I0815 17:37:27.054888   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.055447   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.055473   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.055759   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.055912   38166 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:37:27.058232   38166 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:27.058613   38166 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:37:27.058633   38166 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:27.058768   38166 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:37:27.059057   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.059106   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.073379   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46349
	I0815 17:37:27.073796   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.074262   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.074284   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.074573   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.074725   38166 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:37:27.074894   38166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:27.074922   38166 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:37:27.077980   38166 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:27.078471   38166 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:37:27.078526   38166 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:37:27.078684   38166 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:37:27.078849   38166 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:37:27.079005   38166 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:37:27.079163   38166 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:37:27.169021   38166 ssh_runner.go:195] Run: systemctl --version
	I0815 17:37:27.175547   38166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:27.191203   38166 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:37:27.191233   38166 api_server.go:166] Checking apiserver status ...
	I0815 17:37:27.191263   38166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:37:27.205330   38166 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	W0815 17:37:27.215122   38166 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:37:27.215175   38166 ssh_runner.go:195] Run: ls
	I0815 17:37:27.223042   38166 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:37:27.227758   38166 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:37:27.227779   38166 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:37:27.227797   38166 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:27.227815   38166 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:37:27.228125   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.228165   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.242813   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44707
	I0815 17:37:27.243280   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.243796   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.243822   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.244115   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.244291   38166 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:37:27.245948   38166 status.go:330] ha-683878-m02 host status = "Stopped" (err=<nil>)
	I0815 17:37:27.245965   38166 status.go:343] host is not running, skipping remaining checks
	I0815 17:37:27.245973   38166 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:27.245997   38166 status.go:255] checking status of ha-683878-m03 ...
	I0815 17:37:27.246288   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.246352   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.262085   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41073
	I0815 17:37:27.262575   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.263031   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.263053   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.263340   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.263521   38166 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:37:27.264981   38166 status.go:330] ha-683878-m03 host status = "Running" (err=<nil>)
	I0815 17:37:27.264998   38166 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:37:27.265277   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.265311   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.279615   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35399
	I0815 17:37:27.280010   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.280446   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.280468   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.280767   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.280916   38166 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:37:27.283561   38166 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:27.283924   38166 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:27.283948   38166 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:27.284084   38166 host.go:66] Checking if "ha-683878-m03" exists ...
	I0815 17:37:27.284426   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.284465   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.298947   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40937
	I0815 17:37:27.299406   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.299899   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.299918   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.300176   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.300340   38166 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:37:27.300533   38166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:27.300551   38166 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:37:27.303235   38166 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:27.303683   38166 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:27.303711   38166 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:27.303846   38166 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:37:27.304019   38166 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:37:27.304161   38166 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:37:27.304285   38166 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:37:27.388509   38166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:27.406340   38166 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:37:27.406372   38166 api_server.go:166] Checking apiserver status ...
	I0815 17:37:27.406420   38166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:37:27.420114   38166 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	W0815 17:37:27.430133   38166 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:37:27.430194   38166 ssh_runner.go:195] Run: ls
	I0815 17:37:27.434532   38166 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:37:27.438838   38166 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:37:27.438861   38166 status.go:422] ha-683878-m03 apiserver status = Running (err=<nil>)
	I0815 17:37:27.438872   38166 status.go:257] ha-683878-m03 status: &{Name:ha-683878-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:37:27.438890   38166 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:37:27.439171   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.439220   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.454627   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36633
	I0815 17:37:27.454967   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.455596   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.455613   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.455909   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.456101   38166 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:37:27.457725   38166 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:37:27.457761   38166 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:37:27.458016   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.458046   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.472027   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41187
	I0815 17:37:27.472387   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.472874   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.472895   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.473198   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.473379   38166 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:37:27.476068   38166 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:27.476429   38166 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:27.476450   38166 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:27.476638   38166 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:37:27.477051   38166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:27.477103   38166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:27.491435   38166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37823
	I0815 17:37:27.491790   38166 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:27.492242   38166 main.go:141] libmachine: Using API Version  1
	I0815 17:37:27.492261   38166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:27.492542   38166 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:27.492728   38166 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:37:27.492884   38166 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:37:27.492906   38166 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:37:27.495680   38166 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:27.496159   38166 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:27.496185   38166 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:27.496337   38166 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:37:27.496512   38166 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:37:27.496672   38166 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:37:27.496827   38166 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:37:27.580142   38166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:37:27.594396   38166 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr" : exit status 7
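ha_test.go:432 note: the non-zero exit is consistent with the stderr above, where ha-683878-m02 is still reported as Host:Stopped / Kubelet:Stopped after the node start attempt; minikube status is expected to exit non-zero while a node in the profile is not running. A minimal manual reproduction sketch, reusing only commands that already appear in this log (the interpretation of the exit code and the added echo are assumptions for illustration, not something this log establishes):

	# re-run the status probe the test issued; a non-zero exit is expected while m02 is stopped
	out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr; echo "exit=$?"

	# retry the secondary-node restart (same command as the last Audit entry below), then check again
	out/minikube-linux-amd64 -p ha-683878 node start m02 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-683878 status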
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683878 -n ha-683878
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683878 logs -n 25: (1.352717222s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878:/home/docker/cp-test_ha-683878-m03_ha-683878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878 sudo cat                                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m02:/home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m02 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04:/home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m04 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp testdata/cp-test.txt                                                | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878:/home/docker/cp-test_ha-683878-m04_ha-683878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878 sudo cat                                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m02:/home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m02 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03:/home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m03 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683878 node stop m02 -v=7                                                     | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-683878 node start m02 -v=7                                                    | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:28:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:28:34.800374   32399 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:28:34.800479   32399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:28:34.800504   32399 out.go:358] Setting ErrFile to fd 2...
	I0815 17:28:34.800512   32399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:28:34.800695   32399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:28:34.801271   32399 out.go:352] Setting JSON to false
	I0815 17:28:34.802107   32399 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4261,"bootTime":1723738654,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:28:34.802164   32399 start.go:139] virtualization: kvm guest
	I0815 17:28:34.804236   32399 out.go:177] * [ha-683878] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:28:34.805491   32399 notify.go:220] Checking for updates...
	I0815 17:28:34.805523   32399 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:28:34.806921   32399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:28:34.808443   32399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:28:34.809727   32399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:28:34.810839   32399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:28:34.811973   32399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:28:34.813220   32399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:28:34.849062   32399 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 17:28:34.850087   32399 start.go:297] selected driver: kvm2
	I0815 17:28:34.850100   32399 start.go:901] validating driver "kvm2" against <nil>
	I0815 17:28:34.850111   32399 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:28:34.850761   32399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:28:34.850838   32399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 17:28:34.865056   32399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 17:28:34.865108   32399 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:28:34.865309   32399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:28:34.865370   32399 cni.go:84] Creating CNI manager for ""
	I0815 17:28:34.865382   32399 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0815 17:28:34.865390   32399 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:28:34.865439   32399 start.go:340] cluster config:
	{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:28:34.865525   32399 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:28:34.867162   32399 out.go:177] * Starting "ha-683878" primary control-plane node in "ha-683878" cluster
	I0815 17:28:34.868155   32399 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:28:34.868196   32399 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:28:34.868209   32399 cache.go:56] Caching tarball of preloaded images
	I0815 17:28:34.868281   32399 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:28:34.868295   32399 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:28:34.868647   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:28:34.868671   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json: {Name:mk42d1859c56aeb2f4ea506a56543ef14b895257 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:28:34.868838   32399 start.go:360] acquireMachinesLock for ha-683878: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:28:34.868878   32399 start.go:364] duration metric: took 24.715µs to acquireMachinesLock for "ha-683878"
	I0815 17:28:34.868902   32399 start.go:93] Provisioning new machine with config: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:28:34.868992   32399 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 17:28:34.870549   32399 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:28:34.870682   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:28:34.870724   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:28:34.884647   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41983
	I0815 17:28:34.885062   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:28:34.885643   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:28:34.885667   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:28:34.885948   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:28:34.886145   32399 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:28:34.886300   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:34.886445   32399 start.go:159] libmachine.API.Create for "ha-683878" (driver="kvm2")
	I0815 17:28:34.886482   32399 client.go:168] LocalClient.Create starting
	I0815 17:28:34.886520   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 17:28:34.886562   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:28:34.886575   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:28:34.886628   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 17:28:34.886646   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:28:34.886657   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:28:34.886678   32399 main.go:141] libmachine: Running pre-create checks...
	I0815 17:28:34.886685   32399 main.go:141] libmachine: (ha-683878) Calling .PreCreateCheck
	I0815 17:28:34.886994   32399 main.go:141] libmachine: (ha-683878) Calling .GetConfigRaw
	I0815 17:28:34.887356   32399 main.go:141] libmachine: Creating machine...
	I0815 17:28:34.887372   32399 main.go:141] libmachine: (ha-683878) Calling .Create
	I0815 17:28:34.887511   32399 main.go:141] libmachine: (ha-683878) Creating KVM machine...
	I0815 17:28:34.888649   32399 main.go:141] libmachine: (ha-683878) DBG | found existing default KVM network
	I0815 17:28:34.889478   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:34.889336   32422 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0815 17:28:34.889535   32399 main.go:141] libmachine: (ha-683878) DBG | created network xml: 
	I0815 17:28:34.889551   32399 main.go:141] libmachine: (ha-683878) DBG | <network>
	I0815 17:28:34.889561   32399 main.go:141] libmachine: (ha-683878) DBG |   <name>mk-ha-683878</name>
	I0815 17:28:34.889575   32399 main.go:141] libmachine: (ha-683878) DBG |   <dns enable='no'/>
	I0815 17:28:34.889587   32399 main.go:141] libmachine: (ha-683878) DBG |   
	I0815 17:28:34.889603   32399 main.go:141] libmachine: (ha-683878) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 17:28:34.889615   32399 main.go:141] libmachine: (ha-683878) DBG |     <dhcp>
	I0815 17:28:34.889623   32399 main.go:141] libmachine: (ha-683878) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 17:28:34.889633   32399 main.go:141] libmachine: (ha-683878) DBG |     </dhcp>
	I0815 17:28:34.889648   32399 main.go:141] libmachine: (ha-683878) DBG |   </ip>
	I0815 17:28:34.889660   32399 main.go:141] libmachine: (ha-683878) DBG |   
	I0815 17:28:34.889674   32399 main.go:141] libmachine: (ha-683878) DBG | </network>
	I0815 17:28:34.889687   32399 main.go:141] libmachine: (ha-683878) DBG | 
	I0815 17:28:34.894456   32399 main.go:141] libmachine: (ha-683878) DBG | trying to create private KVM network mk-ha-683878 192.168.39.0/24...
	I0815 17:28:34.954565   32399 main.go:141] libmachine: (ha-683878) DBG | private KVM network mk-ha-683878 192.168.39.0/24 created
	I0815 17:28:34.954594   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:34.954547   32422 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:28:34.954606   32399 main.go:141] libmachine: (ha-683878) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878 ...
	I0815 17:28:34.954623   32399 main.go:141] libmachine: (ha-683878) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:28:34.954688   32399 main.go:141] libmachine: (ha-683878) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 17:28:35.191456   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:35.191322   32422 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa...
	I0815 17:28:35.362236   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:35.362134   32422 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/ha-683878.rawdisk...
	I0815 17:28:35.362262   32399 main.go:141] libmachine: (ha-683878) DBG | Writing magic tar header
	I0815 17:28:35.362271   32399 main.go:141] libmachine: (ha-683878) DBG | Writing SSH key tar header
	I0815 17:28:35.362281   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:35.362253   32422 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878 ...
	I0815 17:28:35.362386   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878
	I0815 17:28:35.362412   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 17:28:35.362419   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:28:35.362431   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 17:28:35.362442   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878 (perms=drwx------)
	I0815 17:28:35.362475   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 17:28:35.362486   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 17:28:35.362494   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 17:28:35.362503   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 17:28:35.362513   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 17:28:35.362525   32399 main.go:141] libmachine: (ha-683878) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 17:28:35.362537   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home/jenkins
	I0815 17:28:35.362547   32399 main.go:141] libmachine: (ha-683878) Creating domain...
	I0815 17:28:35.362555   32399 main.go:141] libmachine: (ha-683878) DBG | Checking permissions on dir: /home
	I0815 17:28:35.362569   32399 main.go:141] libmachine: (ha-683878) DBG | Skipping /home - not owner
	I0815 17:28:35.363677   32399 main.go:141] libmachine: (ha-683878) define libvirt domain using xml: 
	I0815 17:28:35.363695   32399 main.go:141] libmachine: (ha-683878) <domain type='kvm'>
	I0815 17:28:35.363702   32399 main.go:141] libmachine: (ha-683878)   <name>ha-683878</name>
	I0815 17:28:35.363709   32399 main.go:141] libmachine: (ha-683878)   <memory unit='MiB'>2200</memory>
	I0815 17:28:35.363715   32399 main.go:141] libmachine: (ha-683878)   <vcpu>2</vcpu>
	I0815 17:28:35.363722   32399 main.go:141] libmachine: (ha-683878)   <features>
	I0815 17:28:35.363727   32399 main.go:141] libmachine: (ha-683878)     <acpi/>
	I0815 17:28:35.363732   32399 main.go:141] libmachine: (ha-683878)     <apic/>
	I0815 17:28:35.363739   32399 main.go:141] libmachine: (ha-683878)     <pae/>
	I0815 17:28:35.363750   32399 main.go:141] libmachine: (ha-683878)     
	I0815 17:28:35.363759   32399 main.go:141] libmachine: (ha-683878)   </features>
	I0815 17:28:35.363769   32399 main.go:141] libmachine: (ha-683878)   <cpu mode='host-passthrough'>
	I0815 17:28:35.363775   32399 main.go:141] libmachine: (ha-683878)   
	I0815 17:28:35.363778   32399 main.go:141] libmachine: (ha-683878)   </cpu>
	I0815 17:28:35.363785   32399 main.go:141] libmachine: (ha-683878)   <os>
	I0815 17:28:35.363795   32399 main.go:141] libmachine: (ha-683878)     <type>hvm</type>
	I0815 17:28:35.363800   32399 main.go:141] libmachine: (ha-683878)     <boot dev='cdrom'/>
	I0815 17:28:35.363807   32399 main.go:141] libmachine: (ha-683878)     <boot dev='hd'/>
	I0815 17:28:35.363812   32399 main.go:141] libmachine: (ha-683878)     <bootmenu enable='no'/>
	I0815 17:28:35.363816   32399 main.go:141] libmachine: (ha-683878)   </os>
	I0815 17:28:35.363823   32399 main.go:141] libmachine: (ha-683878)   <devices>
	I0815 17:28:35.363834   32399 main.go:141] libmachine: (ha-683878)     <disk type='file' device='cdrom'>
	I0815 17:28:35.363854   32399 main.go:141] libmachine: (ha-683878)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/boot2docker.iso'/>
	I0815 17:28:35.363867   32399 main.go:141] libmachine: (ha-683878)       <target dev='hdc' bus='scsi'/>
	I0815 17:28:35.363874   32399 main.go:141] libmachine: (ha-683878)       <readonly/>
	I0815 17:28:35.363878   32399 main.go:141] libmachine: (ha-683878)     </disk>
	I0815 17:28:35.363886   32399 main.go:141] libmachine: (ha-683878)     <disk type='file' device='disk'>
	I0815 17:28:35.363892   32399 main.go:141] libmachine: (ha-683878)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 17:28:35.363902   32399 main.go:141] libmachine: (ha-683878)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/ha-683878.rawdisk'/>
	I0815 17:28:35.363909   32399 main.go:141] libmachine: (ha-683878)       <target dev='hda' bus='virtio'/>
	I0815 17:28:35.363920   32399 main.go:141] libmachine: (ha-683878)     </disk>
	I0815 17:28:35.363929   32399 main.go:141] libmachine: (ha-683878)     <interface type='network'>
	I0815 17:28:35.363940   32399 main.go:141] libmachine: (ha-683878)       <source network='mk-ha-683878'/>
	I0815 17:28:35.363957   32399 main.go:141] libmachine: (ha-683878)       <model type='virtio'/>
	I0815 17:28:35.363970   32399 main.go:141] libmachine: (ha-683878)     </interface>
	I0815 17:28:35.363979   32399 main.go:141] libmachine: (ha-683878)     <interface type='network'>
	I0815 17:28:35.363991   32399 main.go:141] libmachine: (ha-683878)       <source network='default'/>
	I0815 17:28:35.364003   32399 main.go:141] libmachine: (ha-683878)       <model type='virtio'/>
	I0815 17:28:35.364011   32399 main.go:141] libmachine: (ha-683878)     </interface>
	I0815 17:28:35.364023   32399 main.go:141] libmachine: (ha-683878)     <serial type='pty'>
	I0815 17:28:35.364033   32399 main.go:141] libmachine: (ha-683878)       <target port='0'/>
	I0815 17:28:35.364041   32399 main.go:141] libmachine: (ha-683878)     </serial>
	I0815 17:28:35.364050   32399 main.go:141] libmachine: (ha-683878)     <console type='pty'>
	I0815 17:28:35.364060   32399 main.go:141] libmachine: (ha-683878)       <target type='serial' port='0'/>
	I0815 17:28:35.364076   32399 main.go:141] libmachine: (ha-683878)     </console>
	I0815 17:28:35.364106   32399 main.go:141] libmachine: (ha-683878)     <rng model='virtio'>
	I0815 17:28:35.364131   32399 main.go:141] libmachine: (ha-683878)       <backend model='random'>/dev/random</backend>
	I0815 17:28:35.364140   32399 main.go:141] libmachine: (ha-683878)     </rng>
	I0815 17:28:35.364151   32399 main.go:141] libmachine: (ha-683878)     
	I0815 17:28:35.364162   32399 main.go:141] libmachine: (ha-683878)     
	I0815 17:28:35.364173   32399 main.go:141] libmachine: (ha-683878)   </devices>
	I0815 17:28:35.364182   32399 main.go:141] libmachine: (ha-683878) </domain>
	I0815 17:28:35.364197   32399 main.go:141] libmachine: (ha-683878) 
	I0815 17:28:35.368264   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:41:65:82 in network default
	I0815 17:28:35.368736   32399 main.go:141] libmachine: (ha-683878) Ensuring networks are active...
	I0815 17:28:35.368759   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:35.369370   32399 main.go:141] libmachine: (ha-683878) Ensuring network default is active
	I0815 17:28:35.369656   32399 main.go:141] libmachine: (ha-683878) Ensuring network mk-ha-683878 is active
	I0815 17:28:35.370074   32399 main.go:141] libmachine: (ha-683878) Getting domain xml...
	I0815 17:28:35.370689   32399 main.go:141] libmachine: (ha-683878) Creating domain...
	I0815 17:28:36.535163   32399 main.go:141] libmachine: (ha-683878) Waiting to get IP...
	I0815 17:28:36.535871   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:36.536220   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:36.536261   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:36.536212   32422 retry.go:31] will retry after 215.159557ms: waiting for machine to come up
	I0815 17:28:36.752670   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:36.753187   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:36.753215   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:36.753136   32422 retry.go:31] will retry after 278.070607ms: waiting for machine to come up
	I0815 17:28:37.032729   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:37.033223   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:37.033252   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:37.033186   32422 retry.go:31] will retry after 302.870993ms: waiting for machine to come up
	I0815 17:28:37.337510   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:37.337962   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:37.337990   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:37.337907   32422 retry.go:31] will retry after 475.34796ms: waiting for machine to come up
	I0815 17:28:37.814459   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:37.814892   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:37.814920   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:37.814839   32422 retry.go:31] will retry after 512.676016ms: waiting for machine to come up
	I0815 17:28:38.329532   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:38.329864   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:38.329893   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:38.329818   32422 retry.go:31] will retry after 622.237179ms: waiting for machine to come up
	I0815 17:28:38.953579   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:38.953931   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:38.953971   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:38.953895   32422 retry.go:31] will retry after 794.455757ms: waiting for machine to come up
	I0815 17:28:39.749652   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:39.750014   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:39.750039   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:39.749964   32422 retry.go:31] will retry after 1.306931639s: waiting for machine to come up
	I0815 17:28:41.058790   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:41.059117   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:41.059146   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:41.059062   32422 retry.go:31] will retry after 1.852585502s: waiting for machine to come up
	I0815 17:28:42.913929   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:42.914161   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:42.914188   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:42.914122   32422 retry.go:31] will retry after 2.102645836s: waiting for machine to come up
	I0815 17:28:45.018326   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:45.018830   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:45.018858   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:45.018780   32422 retry.go:31] will retry after 2.568960935s: waiting for machine to come up
	I0815 17:28:47.589452   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:47.589768   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:47.589794   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:47.589735   32422 retry.go:31] will retry after 2.187445497s: waiting for machine to come up
	I0815 17:28:49.778302   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:49.778691   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:49.778720   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:49.778651   32422 retry.go:31] will retry after 2.908424791s: waiting for machine to come up
	I0815 17:28:52.689499   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:52.689792   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find current IP address of domain ha-683878 in network mk-ha-683878
	I0815 17:28:52.689819   32399 main.go:141] libmachine: (ha-683878) DBG | I0815 17:28:52.689733   32422 retry.go:31] will retry after 5.582171457s: waiting for machine to come up
	I0815 17:28:58.276256   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.276721   32399 main.go:141] libmachine: (ha-683878) Found IP for machine: 192.168.39.17
	I0815 17:28:58.276749   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has current primary IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.276759   32399 main.go:141] libmachine: (ha-683878) Reserving static IP address...
	I0815 17:28:58.277038   32399 main.go:141] libmachine: (ha-683878) DBG | unable to find host DHCP lease matching {name: "ha-683878", mac: "52:54:00:fe:4b:82", ip: "192.168.39.17"} in network mk-ha-683878
	I0815 17:28:58.346012   32399 main.go:141] libmachine: (ha-683878) Reserved static IP address: 192.168.39.17
	I0815 17:28:58.346045   32399 main.go:141] libmachine: (ha-683878) Waiting for SSH to be available...
	I0815 17:28:58.346053   32399 main.go:141] libmachine: (ha-683878) DBG | Getting to WaitForSSH function...
	I0815 17:28:58.349018   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.349481   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:minikube Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.349504   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.349659   32399 main.go:141] libmachine: (ha-683878) DBG | Using SSH client type: external
	I0815 17:28:58.349693   32399 main.go:141] libmachine: (ha-683878) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa (-rw-------)
	I0815 17:28:58.349761   32399 main.go:141] libmachine: (ha-683878) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.17 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:28:58.349791   32399 main.go:141] libmachine: (ha-683878) DBG | About to run SSH command:
	I0815 17:28:58.349808   32399 main.go:141] libmachine: (ha-683878) DBG | exit 0
	I0815 17:28:58.472261   32399 main.go:141] libmachine: (ha-683878) DBG | SSH cmd err, output: <nil>: 
	I0815 17:28:58.472552   32399 main.go:141] libmachine: (ha-683878) KVM machine creation complete!
	I0815 17:28:58.472835   32399 main.go:141] libmachine: (ha-683878) Calling .GetConfigRaw
	I0815 17:28:58.473309   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:58.473477   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:58.473617   32399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 17:28:58.473633   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:28:58.474916   32399 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 17:28:58.474936   32399 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 17:28:58.474944   32399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 17:28:58.474952   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.476942   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.477287   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.477310   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.477437   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:58.477612   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.477724   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.477857   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:58.477988   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:58.478202   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:58.478213   32399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 17:28:58.575551   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
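The "native" SSH client referenced above dials the guest directly and runs `exit 0` as a liveness probe: a zero exit status means sshd is answering and key authentication works. Below is a minimal Go sketch of that kind of probe, using golang.org/x/crypto/ssh with the address and key path printed in the log; it is illustrative only, not minikube's actual client code.

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and address taken from the log above; adjust for your environment.
    	keyPath := "/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa"
    	addr := "192.168.39.17:22"

    	keyBytes, err := os.ReadFile(keyPath)
    	if err != nil {
    		log.Fatalf("read key: %v", err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatalf("parse key: %v", err)
    	}

    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // same effect as StrictHostKeyChecking=no above
    	}

    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		log.Fatalf("dial: %v", err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatalf("session: %v", err)
    	}
    	defer sess.Close()

    	// The same probe shown in the log: run "exit 0" and treat success as "SSH is available".
    	if err := sess.Run("exit 0"); err != nil {
    		log.Fatalf("probe failed: %v", err)
    	}
    	fmt.Println("SSH is available")
    }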
	I0815 17:28:58.575575   32399 main.go:141] libmachine: Detecting the provisioner...
	I0815 17:28:58.575583   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.578192   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.578538   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.578565   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.578706   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:58.578890   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.579056   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.579230   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:58.579402   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:58.579606   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:58.579619   32399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 17:28:58.681136   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 17:28:58.681242   32399 main.go:141] libmachine: found compatible host: buildroot
	I0815 17:28:58.681252   32399 main.go:141] libmachine: Provisioning with buildroot...
	I0815 17:28:58.681259   32399 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:28:58.681494   32399 buildroot.go:166] provisioning hostname "ha-683878"
	I0815 17:28:58.681518   32399 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:28:58.681725   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.684126   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.684515   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.684546   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.684628   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:58.684796   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.684942   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.685046   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:58.685310   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:58.685483   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:58.685495   32399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683878 && echo "ha-683878" | sudo tee /etc/hostname
	I0815 17:28:58.804620   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878
	
	I0815 17:28:58.804650   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.807320   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.807700   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.807740   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.807912   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:58.808085   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.808262   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:58.808388   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:58.808568   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:58.808754   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:58.808779   32399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683878/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:28:58.917934   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:28:58.917967   32399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:28:58.918006   32399 buildroot.go:174] setting up certificates
	I0815 17:28:58.918018   32399 provision.go:84] configureAuth start
	I0815 17:28:58.918030   32399 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:28:58.918284   32399 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:28:58.920820   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.921181   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.921206   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.921272   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:58.923106   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.923501   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:58.923522   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:58.923681   32399 provision.go:143] copyHostCerts
	I0815 17:28:58.923721   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:28:58.923779   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 17:28:58.923794   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:28:58.923861   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:28:58.923944   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:28:58.923961   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 17:28:58.923968   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:28:58.923992   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:28:58.924044   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:28:58.924061   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 17:28:58.924067   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:28:58.924121   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:28:58.924183   32399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.ha-683878 san=[127.0.0.1 192.168.39.17 ha-683878 localhost minikube]
	I0815 17:28:59.216173   32399 provision.go:177] copyRemoteCerts
	I0815 17:28:59.216225   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:28:59.216247   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.218649   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.218925   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.218952   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.219116   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.219296   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.219540   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.219697   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:28:59.303096   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:28:59.303174   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:28:59.329729   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:28:59.329803   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0815 17:28:59.352653   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:28:59.352731   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:28:59.383980   32399 provision.go:87] duration metric: took 465.94572ms to configureAuth
	I0815 17:28:59.384005   32399 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:28:59.384227   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:28:59.384320   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.386956   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.387346   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.387380   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.387537   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.387712   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.387845   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.387999   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.388182   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:59.388386   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:59.388406   32399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:28:59.667257   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:28:59.667281   32399 main.go:141] libmachine: Checking connection to Docker...
	I0815 17:28:59.667292   32399 main.go:141] libmachine: (ha-683878) Calling .GetURL
	I0815 17:28:59.668468   32399 main.go:141] libmachine: (ha-683878) DBG | Using libvirt version 6000000
	I0815 17:28:59.670585   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.670944   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.670982   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.671050   32399 main.go:141] libmachine: Docker is up and running!
	I0815 17:28:59.671060   32399 main.go:141] libmachine: Reticulating splines...
	I0815 17:28:59.671066   32399 client.go:171] duration metric: took 24.784574398s to LocalClient.Create
	I0815 17:28:59.671089   32399 start.go:167] duration metric: took 24.784644393s to libmachine.API.Create "ha-683878"
	I0815 17:28:59.671101   32399 start.go:293] postStartSetup for "ha-683878" (driver="kvm2")
	I0815 17:28:59.671120   32399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:28:59.671137   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.671378   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:28:59.671405   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.673342   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.673625   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.673652   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.673778   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.673975   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.674150   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.674440   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:28:59.755301   32399 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:28:59.759393   32399 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:28:59.759426   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:28:59.759487   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:28:59.759563   32399 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 17:28:59.759572   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 17:28:59.759660   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:28:59.768798   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:28:59.791446   32399 start.go:296] duration metric: took 120.325971ms for postStartSetup
	I0815 17:28:59.791485   32399 main.go:141] libmachine: (ha-683878) Calling .GetConfigRaw
	I0815 17:28:59.792035   32399 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:28:59.794600   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.794943   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.794970   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.795198   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:28:59.795393   32399 start.go:128] duration metric: took 24.926390331s to createHost
	I0815 17:28:59.795424   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.797977   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.798326   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.798361   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.798514   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.798686   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.798885   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.799109   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.799301   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:28:59.799459   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:28:59.799474   32399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:28:59.901035   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723742939.879404272
	
	I0815 17:28:59.901058   32399 fix.go:216] guest clock: 1723742939.879404272
	I0815 17:28:59.901066   32399 fix.go:229] Guest: 2024-08-15 17:28:59.879404272 +0000 UTC Remote: 2024-08-15 17:28:59.795412333 +0000 UTC m=+25.028306997 (delta=83.991939ms)
	I0815 17:28:59.901120   32399 fix.go:200] guest clock delta is within tolerance: 83.991939ms
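The clock check above asks the guest for `date +%s.%N`, parses the output as a fractional Unix timestamp, and compares it against the host time recorded when the command returned; here the ~84ms delta is accepted as being within tolerance. A rough Go sketch of that comparison follows (the tolerance constant is assumed; the log only states that the delta was acceptable).

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	// Output of `date +%s.%N` on the guest, as seen in the log.
    	guestOut := "1723742939.879404272"

    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	host := time.Now() // in the log, the host timestamp captured when the command completed

    	delta := math.Abs(guest.Sub(host).Seconds())
    	const tolerance = 1.0 // assumed threshold, in seconds
    	fmt.Printf("guest clock delta %.3fs, within tolerance: %v\n", delta, delta < tolerance)
    }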
	I0815 17:28:59.901125   32399 start.go:83] releasing machines lock for "ha-683878", held for 25.03223627s
	I0815 17:28:59.901144   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.901396   32399 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:28:59.903603   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.903923   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.903949   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.904114   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.904612   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.904814   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:28:59.904900   32399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:28:59.904937   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.905033   32399 ssh_runner.go:195] Run: cat /version.json
	I0815 17:28:59.905058   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:28:59.907127   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.907468   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.907505   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.907528   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.907584   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.907785   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.907866   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:28:59.907890   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:28:59.907955   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.908069   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:28:59.908123   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:28:59.908212   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:28:59.908352   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:28:59.908482   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:28:59.982274   32399 ssh_runner.go:195] Run: systemctl --version
	I0815 17:29:00.010603   32399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:29:00.173386   32399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:29:00.179262   32399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:29:00.179328   32399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:29:00.195996   32399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 17:29:00.196018   32399 start.go:495] detecting cgroup driver to use...
	I0815 17:29:00.196090   32399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:29:00.212762   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:29:00.225540   32399 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:29:00.225588   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:29:00.239169   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:29:00.252624   32399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:29:00.371331   32399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:29:00.532347   32399 docker.go:233] disabling docker service ...
	I0815 17:29:00.532421   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:29:00.547210   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:29:00.559940   32399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:29:00.671778   32399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:29:00.781500   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:29:00.795997   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:29:00.814573   32399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:29:00.814636   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.825112   32399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:29:00.825188   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.835607   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.845889   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.856124   32399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:29:00.866904   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.877044   32399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:00.893637   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
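Taken together, the sed edits above amount to a small drop-in configuration for CRI-O: the pause image is pinned to registry.k8s.io/pause:3.10, cgroupfs is set as the cgroup manager, conmon is placed in the pod cgroup, and unprivileged ports are opened from 0. Assuming an otherwise stock /etc/crio/crio.conf.d/02-crio.conf (the section names below come from CRI-O's usual file layout, not from the log), the resulting fragment looks roughly like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]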
	I0815 17:29:00.904174   32399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:29:00.913740   32399 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 17:29:00.913787   32399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 17:29:00.927332   32399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:29:00.937108   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:29:01.047868   32399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:29:01.180694   32399 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:29:01.180752   32399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:29:01.185847   32399 start.go:563] Will wait 60s for crictl version
	I0815 17:29:01.185887   32399 ssh_runner.go:195] Run: which crictl
	I0815 17:29:01.189535   32399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:29:01.227446   32399 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:29:01.227527   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:29:01.256693   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:29:01.288058   32399 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:29:01.289397   32399 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:29:01.291758   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:01.292117   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:29:01.292142   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:01.292296   32399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:29:01.296691   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:29:01.309238   32399 kubeadm.go:883] updating cluster {Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:29:01.309336   32399 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:29:01.309380   32399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:29:01.345370   32399 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 17:29:01.345438   32399 ssh_runner.go:195] Run: which lz4
	I0815 17:29:01.349279   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0815 17:29:01.349352   32399 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 17:29:01.353590   32399 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 17:29:01.353620   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 17:29:02.641678   32399 crio.go:462] duration metric: took 1.292340744s to copy over tarball
	I0815 17:29:02.641734   32399 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 17:29:04.650799   32399 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.009042805s)
	I0815 17:29:04.650821   32399 crio.go:469] duration metric: took 2.009122075s to extract the tarball
	I0815 17:29:04.650828   32399 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 17:29:04.687959   32399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:29:04.732018   32399 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:29:04.732040   32399 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:29:04.732049   32399 kubeadm.go:934] updating node { 192.168.39.17 8443 v1.31.0 crio true true} ...
	I0815 17:29:04.732185   32399 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:29:04.732267   32399 ssh_runner.go:195] Run: crio config
	I0815 17:29:04.776215   32399 cni.go:84] Creating CNI manager for ""
	I0815 17:29:04.776232   32399 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 17:29:04.776241   32399 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:29:04.776266   32399 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683878 NodeName:ha-683878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:29:04.776440   32399 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683878"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 17:29:04.776467   32399 kube-vip.go:115] generating kube-vip config ...
	I0815 17:29:04.776535   32399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 17:29:04.794390   32399 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:29:04.794511   32399 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0815 17:29:04.794575   32399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:29:04.804647   32399 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:29:04.804712   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 17:29:04.814079   32399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0815 17:29:04.830492   32399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:29:04.846899   32399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0815 17:29:04.863275   32399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0815 17:29:04.879299   32399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:29:04.883154   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:29:04.896153   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:29:05.008398   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:29:05.026462   32399 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878 for IP: 192.168.39.17
	I0815 17:29:05.026485   32399 certs.go:194] generating shared ca certs ...
	I0815 17:29:05.026506   32399 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.026673   32399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:29:05.026724   32399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:29:05.026737   32399 certs.go:256] generating profile certs ...
	I0815 17:29:05.026802   32399 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key
	I0815 17:29:05.026838   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt with IP's: []
	I0815 17:29:05.243686   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt ...
	I0815 17:29:05.243713   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt: {Name:mka6b0ae4d3b6108f0dde5d6e013160dcf23c1a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.243889   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key ...
	I0815 17:29:05.243906   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key: {Name:mk884d016cc8b0e5b7de4262c0afd40292798185 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.244004   32399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.84f93edb
	I0815 17:29:05.244026   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.84f93edb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17 192.168.39.254]
	I0815 17:29:05.345591   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.84f93edb ...
	I0815 17:29:05.345617   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.84f93edb: {Name:mkec3bc615edae99a0ab078c330d2505b6f94ca3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.345790   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.84f93edb ...
	I0815 17:29:05.345807   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.84f93edb: {Name:mk289a9480cee4e4b94a92537ac1cfa80a7cf9a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.345899   32399 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.84f93edb -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt
	I0815 17:29:05.346006   32399 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.84f93edb -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key
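The apiserver certificate written above has to carry the node IP (192.168.39.17), the HA VIP (192.168.39.254), and the in-cluster service IP (10.96.0.1) as SANs, since the API server is reached on all three. A short Go sketch that prints the SANs of such a certificate for inspection (path taken from the log; illustrative only):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Path taken from the log above; adjust as needed.
    	data, err := os.ReadFile("/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// For the certificate generated above this should list the node IP, the HA VIP and the service IP.
    	fmt.Println("DNS SANs:", cert.DNSNames)
    	fmt.Println("IP SANs :", cert.IPAddresses)
    }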
	I0815 17:29:05.346078   32399 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key
	I0815 17:29:05.346099   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt with IP's: []
	I0815 17:29:05.492320   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt ...
	I0815 17:29:05.492348   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt: {Name:mk01a3faddbf012a325f4a20b2b1715c093a8885 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.492526   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key ...
	I0815 17:29:05.492543   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key: {Name:mk0737ef679a14beb8d241632c98c89dd65363db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:05.492638   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:29:05.492662   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:29:05.492682   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:29:05.492701   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:29:05.492721   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:29:05.492739   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:29:05.492751   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:29:05.492768   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:29:05.492835   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 17:29:05.492880   32399 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 17:29:05.492894   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:29:05.492927   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:29:05.492958   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:29:05.492988   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:29:05.493044   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:29:05.493080   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 17:29:05.493100   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:05.493119   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 17:29:05.493679   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:29:05.520195   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:29:05.544164   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:29:05.568703   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:29:05.593486   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 17:29:05.618046   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 17:29:05.642052   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:29:05.665957   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:29:05.690404   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 17:29:05.715449   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:29:05.738950   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 17:29:05.771497   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:29:05.819020   32399 ssh_runner.go:195] Run: openssl version
	I0815 17:29:05.826728   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 17:29:05.842367   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 17:29:05.847050   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 17:29:05.847138   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 17:29:05.853164   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:29:05.863863   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:29:05.874594   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:05.878999   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:05.879049   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:05.884880   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:29:05.895486   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 17:29:05.906013   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 17:29:05.910976   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 17:29:05.911016   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 17:29:05.916970   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
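The three blocks above repeat one pattern per CA bundle: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it into /etc/ssl/certs under <hash>.0 so OpenSSL-based clients can find it. A minimal sketch of that pattern (the file is one of the certs copied above; variable names are illustrative):

    # Register a CA certificate with the OpenSSL hash-lookup directory.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")      # prints e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"     # OpenSSL resolves CAs by <hash>.0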
	I0815 17:29:05.927441   32399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:29:05.931725   32399 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:29:05.931792   32399 kubeadm.go:392] StartCluster: {Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:29:05.931877   32399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:29:05.931914   32399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:29:05.970023   32399 cri.go:89] found id: ""
	I0815 17:29:05.970092   32399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:29:05.980972   32399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:29:05.990882   32399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:29:06.000633   32399 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:29:06.000655   32399 kubeadm.go:157] found existing configuration files:
	
	I0815 17:29:06.000704   32399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 17:29:06.009868   32399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:29:06.009936   32399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:29:06.019505   32399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 17:29:06.028653   32399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:29:06.028769   32399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:29:06.038132   32399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 17:29:06.046992   32399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:29:06.047037   32399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:29:06.055976   32399 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 17:29:06.064527   32399 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:29:06.064565   32399 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
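Each of the four checks above has the same shape: grep the kubeconfig under /etc/kubernetes for the expected control-plane endpoint, and delete the file when the endpoint is missing (here none of the files exist yet, so every grep exits with status 2 and every rm is a no-op). A hedged sketch of the equivalent loop, using the endpoint and file names from the log:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # Drop any kubeconfig that does not reference the expected endpoint.
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done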
	I0815 17:29:06.073712   32399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 17:29:06.175782   32399 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 17:29:06.175999   32399 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:29:06.276047   32399 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:29:06.276216   32399 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:29:06.276346   32399 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:29:06.285277   32399 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:29:06.444404   32399 out.go:235]   - Generating certificates and keys ...
	I0815 17:29:06.444552   32399 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:29:06.444645   32399 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:29:06.553231   32399 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 17:29:06.633700   32399 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 17:29:06.800062   32399 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 17:29:07.034589   32399 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 17:29:07.097287   32399 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 17:29:07.097535   32399 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-683878 localhost] and IPs [192.168.39.17 127.0.0.1 ::1]
	I0815 17:29:07.194740   32399 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 17:29:07.194996   32399 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-683878 localhost] and IPs [192.168.39.17 127.0.0.1 ::1]
	I0815 17:29:07.496079   32399 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 17:29:07.810924   32399 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 17:29:08.036559   32399 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 17:29:08.036848   32399 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:29:08.161049   32399 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:29:08.286279   32399 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 17:29:08.342451   32399 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:29:08.771981   32399 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:29:08.982305   32399 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:29:08.982988   32399 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:29:08.986841   32399 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:29:08.988740   32399 out.go:235]   - Booting up control plane ...
	I0815 17:29:08.988838   32399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:29:08.988964   32399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:29:08.989697   32399 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:29:09.008408   32399 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:29:09.014240   32399 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:29:09.014299   32399 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:29:09.143041   32399 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 17:29:09.143184   32399 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 17:29:09.644268   32399 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.56421ms
	I0815 17:29:09.644370   32399 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 17:29:15.738721   32399 kubeadm.go:310] [api-check] The API server is healthy after 6.097532107s
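Both waits poll well-known health endpoints: the kubelet's healthz on localhost:10248 and the API server on the node's 8443 port. They can be reproduced by hand roughly as follows (a sketch; -k skips TLS verification only for brevity, a real check would pass the cluster CA from /var/lib/minikube/certs/ca.crt):

    # Kubelet health (plain HTTP, local only).
    curl -sf http://127.0.0.1:10248/healthz && echo "kubelet ok"
    # API server readiness (TLS, node address taken from the log).
    curl -sfk https://192.168.39.17:8443/readyz && echo "apiserver ok"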
	I0815 17:29:15.750426   32399 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:29:15.763826   32399 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:29:15.784883   32399 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:29:15.785121   32399 kubeadm.go:310] [mark-control-plane] Marking the node ha-683878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:29:15.799423   32399 kubeadm.go:310] [bootstrap-token] Using token: wla41g.09q7zejczut0pxz8
	I0815 17:29:15.800876   32399 out.go:235]   - Configuring RBAC rules ...
	I0815 17:29:15.800993   32399 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:29:15.806024   32399 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:29:15.812326   32399 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:29:15.815476   32399 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:29:15.823202   32399 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:29:15.826870   32399 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:29:16.145776   32399 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:29:16.580969   32399 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:29:17.145982   32399 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:29:17.146003   32399 kubeadm.go:310] 
	I0815 17:29:17.146068   32399 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:29:17.146075   32399 kubeadm.go:310] 
	I0815 17:29:17.146167   32399 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:29:17.146192   32399 kubeadm.go:310] 
	I0815 17:29:17.146247   32399 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:29:17.146347   32399 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:29:17.146432   32399 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:29:17.146450   32399 kubeadm.go:310] 
	I0815 17:29:17.146525   32399 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:29:17.146538   32399 kubeadm.go:310] 
	I0815 17:29:17.146609   32399 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:29:17.146618   32399 kubeadm.go:310] 
	I0815 17:29:17.146689   32399 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:29:17.146787   32399 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:29:17.146891   32399 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:29:17.146904   32399 kubeadm.go:310] 
	I0815 17:29:17.147017   32399 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:29:17.147124   32399 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:29:17.147132   32399 kubeadm.go:310] 
	I0815 17:29:17.147235   32399 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wla41g.09q7zejczut0pxz8 \
	I0815 17:29:17.147372   32399 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 17:29:17.147403   32399 kubeadm.go:310] 	--control-plane 
	I0815 17:29:17.147409   32399 kubeadm.go:310] 
	I0815 17:29:17.147528   32399 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:29:17.147539   32399 kubeadm.go:310] 
	I0815 17:29:17.147670   32399 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wla41g.09q7zejczut0pxz8 \
	I0815 17:29:17.147847   32399 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 17:29:17.148770   32399 kubeadm.go:310] W0815 17:29:06.157046     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:29:17.149063   32399 kubeadm.go:310] W0815 17:29:06.158172     850 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:29:17.149241   32399 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
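The two W-level lines are kubeadm warning that the generated config still uses the deprecated v1beta3 API; the migration it suggests is mechanical and could be run against the same file minikube writes (a sketch; the output path is a placeholder):

    # Rewrite the kubeadm config to the current API version, as the warning suggests.
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config /var/tmp/minikube/kubeadm-new.yaml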
	I0815 17:29:17.149286   32399 cni.go:84] Creating CNI manager for ""
	I0815 17:29:17.149301   32399 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0815 17:29:17.151041   32399 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 17:29:17.152275   32399 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 17:29:17.157233   32399 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 17:29:17.157248   32399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 17:29:17.179278   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 17:29:17.521540   32399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:29:17.521631   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:17.521673   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683878 minikube.k8s.io/updated_at=2024_08_15T17_29_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=ha-683878 minikube.k8s.io/primary=true
	I0815 17:29:17.709455   32399 ops.go:34] apiserver oom_adj: -16
	I0815 17:29:17.712122   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:18.213088   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:18.713021   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:19.212622   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:19.712707   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:20.213020   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:20.713162   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:29:20.848805   32399 kubeadm.go:1113] duration metric: took 3.327234503s to wait for elevateKubeSystemPrivileges
	I0815 17:29:20.848841   32399 kubeadm.go:394] duration metric: took 14.917053977s to StartCluster
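The burst of identical `kubectl get sa default` runs above is a poll loop: the default ServiceAccount only appears once the controller-manager's token controller is running, so minikube retries (at roughly half-second intervals here) until it exists before finishing cluster bring-up. An equivalent wait written out by hand (the interval is illustrative):

    # Poll until the default ServiceAccount exists in the default namespace.
    until sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done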
	I0815 17:29:20.848878   32399 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:20.848957   32399 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:29:20.849640   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:20.849835   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 17:29:20.849849   32399 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:29:20.849870   32399 start.go:241] waiting for startup goroutines ...
	I0815 17:29:20.849884   32399 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 17:29:20.849948   32399 addons.go:69] Setting storage-provisioner=true in profile "ha-683878"
	I0815 17:29:20.849958   32399 addons.go:69] Setting default-storageclass=true in profile "ha-683878"
	I0815 17:29:20.849984   32399 addons.go:234] Setting addon storage-provisioner=true in "ha-683878"
	I0815 17:29:20.850000   32399 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-683878"
	I0815 17:29:20.850014   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:29:20.850346   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.850384   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.850564   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:29:20.850662   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.850706   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.864882   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0815 17:29:20.865006   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40189
	I0815 17:29:20.865421   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.865458   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.865940   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.865953   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.866103   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.866139   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.866273   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.866438   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.866622   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:29:20.866800   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.866836   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.868650   32399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:29:20.868892   32399 kapi.go:59] client config for ha-683878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 17:29:20.869345   32399 cert_rotation.go:140] Starting client certificate rotation controller
	I0815 17:29:20.869613   32399 addons.go:234] Setting addon default-storageclass=true in "ha-683878"
	I0815 17:29:20.869649   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:29:20.869924   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.869961   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.881677   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34441
	I0815 17:29:20.882095   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.882700   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.882726   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.883095   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.883282   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:29:20.883623   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36671
	I0815 17:29:20.884131   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.884661   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.884680   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.885007   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:29:20.885046   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.885555   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:20.885617   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:20.887106   32399 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:29:20.888541   32399 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:29:20.888553   32399 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:29:20.888566   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:29:20.891279   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:20.891690   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:29:20.891719   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:20.891801   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:29:20.891957   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:29:20.892101   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:29:20.892191   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:29:20.900393   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32909
	I0815 17:29:20.900699   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:20.901148   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:20.901168   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:20.901414   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:20.901598   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:29:20.902863   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:29:20.903044   32399 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:29:20.903064   32399 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:29:20.903080   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:29:20.905254   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:20.905602   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:29:20.905629   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:20.905740   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:29:20.905878   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:29:20.906011   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:29:20.906140   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:29:21.020145   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 17:29:21.024428   32399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:29:21.090739   32399 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:29:21.725433   32399 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
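The long sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.39.1) via an injected hosts{} block, and inserts the log plugin ahead of errors. The patched Corefile can be inspected directly with the same in-VM kubectl binary the log uses:

    # Show the Corefile after the hosts{} injection.
    sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'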
	I0815 17:29:21.914443   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.914473   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.914486   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.914509   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.914738   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.914751   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.914759   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.914767   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.914858   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.914880   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.914899   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.914911   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.914891   32399 main.go:141] libmachine: (ha-683878) DBG | Closing plugin on server side
	I0815 17:29:21.914963   32399 main.go:141] libmachine: (ha-683878) DBG | Closing plugin on server side
	I0815 17:29:21.914964   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.914994   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.916122   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.916140   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.916153   32399 main.go:141] libmachine: (ha-683878) DBG | Closing plugin on server side
	I0815 17:29:21.916211   32399 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 17:29:21.916228   32399 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 17:29:21.916330   32399 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0815 17:29:21.916346   32399 round_trippers.go:469] Request Headers:
	I0815 17:29:21.916358   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:29:21.916366   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:29:21.929821   32399 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0815 17:29:21.930343   32399 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0815 17:29:21.930357   32399 round_trippers.go:469] Request Headers:
	I0815 17:29:21.930366   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:29:21.930371   32399 round_trippers.go:473]     Content-Type: application/json
	I0815 17:29:21.930376   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:29:21.933391   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:29:21.933526   32399 main.go:141] libmachine: Making call to close driver server
	I0815 17:29:21.933541   32399 main.go:141] libmachine: (ha-683878) Calling .Close
	I0815 17:29:21.933764   32399 main.go:141] libmachine: Successfully made call to close driver server
	I0815 17:29:21.933778   32399 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 17:29:21.933799   32399 main.go:141] libmachine: (ha-683878) DBG | Closing plugin on server side
	I0815 17:29:21.936353   32399 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0815 17:29:21.937522   32399 addons.go:510] duration metric: took 1.087634995s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0815 17:29:21.937552   32399 start.go:246] waiting for cluster config update ...
	I0815 17:29:21.937562   32399 start.go:255] writing updated cluster config ...
	I0815 17:29:21.939000   32399 out.go:201] 
	I0815 17:29:21.940316   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:29:21.940375   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:29:21.941919   32399 out.go:177] * Starting "ha-683878-m02" control-plane node in "ha-683878" cluster
	I0815 17:29:21.943129   32399 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:29:21.943157   32399 cache.go:56] Caching tarball of preloaded images
	I0815 17:29:21.943264   32399 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:29:21.943282   32399 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:29:21.943366   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:29:21.943571   32399 start.go:360] acquireMachinesLock for ha-683878-m02: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:29:21.943622   32399 start.go:364] duration metric: took 26.945µs to acquireMachinesLock for "ha-683878-m02"
	I0815 17:29:21.943643   32399 start.go:93] Provisioning new machine with config: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:29:21.943778   32399 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0815 17:29:21.945415   32399 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:29:21.945522   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:21.945550   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:21.959676   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0815 17:29:21.960075   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:21.960532   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:21.960554   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:21.960870   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:21.961043   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetMachineName
	I0815 17:29:21.961214   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:21.961389   32399 start.go:159] libmachine.API.Create for "ha-683878" (driver="kvm2")
	I0815 17:29:21.961413   32399 client.go:168] LocalClient.Create starting
	I0815 17:29:21.961439   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 17:29:21.961469   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:29:21.961483   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:29:21.961533   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 17:29:21.961553   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:29:21.961564   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:29:21.961579   32399 main.go:141] libmachine: Running pre-create checks...
	I0815 17:29:21.961587   32399 main.go:141] libmachine: (ha-683878-m02) Calling .PreCreateCheck
	I0815 17:29:21.961769   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetConfigRaw
	I0815 17:29:21.962172   32399 main.go:141] libmachine: Creating machine...
	I0815 17:29:21.962185   32399 main.go:141] libmachine: (ha-683878-m02) Calling .Create
	I0815 17:29:21.962307   32399 main.go:141] libmachine: (ha-683878-m02) Creating KVM machine...
	I0815 17:29:21.963437   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found existing default KVM network
	I0815 17:29:21.963661   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found existing private KVM network mk-ha-683878
	I0815 17:29:21.963750   32399 main.go:141] libmachine: (ha-683878-m02) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02 ...
	I0815 17:29:21.963770   32399 main.go:141] libmachine: (ha-683878-m02) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:29:21.963829   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:21.963728   32793 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:29:21.963917   32399 main.go:141] libmachine: (ha-683878-m02) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 17:29:22.189623   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:22.189489   32793 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa...
	I0815 17:29:22.483552   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:22.483427   32793 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/ha-683878-m02.rawdisk...
	I0815 17:29:22.483580   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Writing magic tar header
	I0815 17:29:22.483590   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Writing SSH key tar header
	I0815 17:29:22.483598   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:22.483552   32793 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02 ...
	I0815 17:29:22.483690   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02
	I0815 17:29:22.483709   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 17:29:22.483718   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02 (perms=drwx------)
	I0815 17:29:22.483743   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:29:22.483768   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 17:29:22.483780   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 17:29:22.483800   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 17:29:22.483812   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 17:29:22.483825   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 17:29:22.483836   32399 main.go:141] libmachine: (ha-683878-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 17:29:22.483861   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 17:29:22.483871   32399 main.go:141] libmachine: (ha-683878-m02) Creating domain...
	I0815 17:29:22.483877   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home/jenkins
	I0815 17:29:22.483883   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Checking permissions on dir: /home
	I0815 17:29:22.483888   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Skipping /home - not owner
	I0815 17:29:22.484846   32399 main.go:141] libmachine: (ha-683878-m02) define libvirt domain using xml: 
	I0815 17:29:22.484868   32399 main.go:141] libmachine: (ha-683878-m02) <domain type='kvm'>
	I0815 17:29:22.484879   32399 main.go:141] libmachine: (ha-683878-m02)   <name>ha-683878-m02</name>
	I0815 17:29:22.484896   32399 main.go:141] libmachine: (ha-683878-m02)   <memory unit='MiB'>2200</memory>
	I0815 17:29:22.484906   32399 main.go:141] libmachine: (ha-683878-m02)   <vcpu>2</vcpu>
	I0815 17:29:22.484915   32399 main.go:141] libmachine: (ha-683878-m02)   <features>
	I0815 17:29:22.484925   32399 main.go:141] libmachine: (ha-683878-m02)     <acpi/>
	I0815 17:29:22.484932   32399 main.go:141] libmachine: (ha-683878-m02)     <apic/>
	I0815 17:29:22.484938   32399 main.go:141] libmachine: (ha-683878-m02)     <pae/>
	I0815 17:29:22.484944   32399 main.go:141] libmachine: (ha-683878-m02)     
	I0815 17:29:22.484950   32399 main.go:141] libmachine: (ha-683878-m02)   </features>
	I0815 17:29:22.484957   32399 main.go:141] libmachine: (ha-683878-m02)   <cpu mode='host-passthrough'>
	I0815 17:29:22.484962   32399 main.go:141] libmachine: (ha-683878-m02)   
	I0815 17:29:22.484972   32399 main.go:141] libmachine: (ha-683878-m02)   </cpu>
	I0815 17:29:22.484990   32399 main.go:141] libmachine: (ha-683878-m02)   <os>
	I0815 17:29:22.485007   32399 main.go:141] libmachine: (ha-683878-m02)     <type>hvm</type>
	I0815 17:29:22.485020   32399 main.go:141] libmachine: (ha-683878-m02)     <boot dev='cdrom'/>
	I0815 17:29:22.485030   32399 main.go:141] libmachine: (ha-683878-m02)     <boot dev='hd'/>
	I0815 17:29:22.485042   32399 main.go:141] libmachine: (ha-683878-m02)     <bootmenu enable='no'/>
	I0815 17:29:22.485049   32399 main.go:141] libmachine: (ha-683878-m02)   </os>
	I0815 17:29:22.485055   32399 main.go:141] libmachine: (ha-683878-m02)   <devices>
	I0815 17:29:22.485063   32399 main.go:141] libmachine: (ha-683878-m02)     <disk type='file' device='cdrom'>
	I0815 17:29:22.485072   32399 main.go:141] libmachine: (ha-683878-m02)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/boot2docker.iso'/>
	I0815 17:29:22.485085   32399 main.go:141] libmachine: (ha-683878-m02)       <target dev='hdc' bus='scsi'/>
	I0815 17:29:22.485093   32399 main.go:141] libmachine: (ha-683878-m02)       <readonly/>
	I0815 17:29:22.485104   32399 main.go:141] libmachine: (ha-683878-m02)     </disk>
	I0815 17:29:22.485115   32399 main.go:141] libmachine: (ha-683878-m02)     <disk type='file' device='disk'>
	I0815 17:29:22.485128   32399 main.go:141] libmachine: (ha-683878-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 17:29:22.485141   32399 main.go:141] libmachine: (ha-683878-m02)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/ha-683878-m02.rawdisk'/>
	I0815 17:29:22.485153   32399 main.go:141] libmachine: (ha-683878-m02)       <target dev='hda' bus='virtio'/>
	I0815 17:29:22.485171   32399 main.go:141] libmachine: (ha-683878-m02)     </disk>
	I0815 17:29:22.485190   32399 main.go:141] libmachine: (ha-683878-m02)     <interface type='network'>
	I0815 17:29:22.485201   32399 main.go:141] libmachine: (ha-683878-m02)       <source network='mk-ha-683878'/>
	I0815 17:29:22.485212   32399 main.go:141] libmachine: (ha-683878-m02)       <model type='virtio'/>
	I0815 17:29:22.485218   32399 main.go:141] libmachine: (ha-683878-m02)     </interface>
	I0815 17:29:22.485227   32399 main.go:141] libmachine: (ha-683878-m02)     <interface type='network'>
	I0815 17:29:22.485234   32399 main.go:141] libmachine: (ha-683878-m02)       <source network='default'/>
	I0815 17:29:22.485241   32399 main.go:141] libmachine: (ha-683878-m02)       <model type='virtio'/>
	I0815 17:29:22.485249   32399 main.go:141] libmachine: (ha-683878-m02)     </interface>
	I0815 17:29:22.485260   32399 main.go:141] libmachine: (ha-683878-m02)     <serial type='pty'>
	I0815 17:29:22.485271   32399 main.go:141] libmachine: (ha-683878-m02)       <target port='0'/>
	I0815 17:29:22.485283   32399 main.go:141] libmachine: (ha-683878-m02)     </serial>
	I0815 17:29:22.485299   32399 main.go:141] libmachine: (ha-683878-m02)     <console type='pty'>
	I0815 17:29:22.485308   32399 main.go:141] libmachine: (ha-683878-m02)       <target type='serial' port='0'/>
	I0815 17:29:22.485312   32399 main.go:141] libmachine: (ha-683878-m02)     </console>
	I0815 17:29:22.485317   32399 main.go:141] libmachine: (ha-683878-m02)     <rng model='virtio'>
	I0815 17:29:22.485326   32399 main.go:141] libmachine: (ha-683878-m02)       <backend model='random'>/dev/random</backend>
	I0815 17:29:22.485337   32399 main.go:141] libmachine: (ha-683878-m02)     </rng>
	I0815 17:29:22.485345   32399 main.go:141] libmachine: (ha-683878-m02)     
	I0815 17:29:22.485359   32399 main.go:141] libmachine: (ha-683878-m02)     
	I0815 17:29:22.485372   32399 main.go:141] libmachine: (ha-683878-m02)   </devices>
	I0815 17:29:22.485383   32399 main.go:141] libmachine: (ha-683878-m02) </domain>
	I0815 17:29:22.485399   32399 main.go:141] libmachine: (ha-683878-m02) 
	I0815 17:29:22.491722   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:ba:76:17 in network default
	I0815 17:29:22.492242   32399 main.go:141] libmachine: (ha-683878-m02) Ensuring networks are active...
	I0815 17:29:22.492263   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:22.492926   32399 main.go:141] libmachine: (ha-683878-m02) Ensuring network default is active
	I0815 17:29:22.493249   32399 main.go:141] libmachine: (ha-683878-m02) Ensuring network mk-ha-683878 is active
	I0815 17:29:22.493559   32399 main.go:141] libmachine: (ha-683878-m02) Getting domain xml...
	I0815 17:29:22.494271   32399 main.go:141] libmachine: (ha-683878-m02) Creating domain...
	I0815 17:29:23.710119   32399 main.go:141] libmachine: (ha-683878-m02) Waiting to get IP...
	I0815 17:29:23.710759   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:23.711081   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:23.711101   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:23.711072   32793 retry.go:31] will retry after 262.72363ms: waiting for machine to come up
	I0815 17:29:23.975486   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:23.975928   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:23.975955   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:23.975897   32793 retry.go:31] will retry after 247.473384ms: waiting for machine to come up
	I0815 17:29:24.225431   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:24.225806   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:24.225831   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:24.225773   32793 retry.go:31] will retry after 384.972078ms: waiting for machine to come up
	I0815 17:29:24.612321   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:24.612824   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:24.612840   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:24.612795   32793 retry.go:31] will retry after 518.994074ms: waiting for machine to come up
	I0815 17:29:25.133498   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:25.133957   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:25.133975   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:25.133932   32793 retry.go:31] will retry after 584.32884ms: waiting for machine to come up
	I0815 17:29:25.719541   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:25.719896   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:25.719923   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:25.719849   32793 retry.go:31] will retry after 842.277729ms: waiting for machine to come up
	I0815 17:29:26.563298   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:26.563685   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:26.563716   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:26.563637   32793 retry.go:31] will retry after 746.421072ms: waiting for machine to come up
	I0815 17:29:27.311847   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:27.312238   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:27.312271   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:27.312216   32793 retry.go:31] will retry after 1.160084319s: waiting for machine to come up
	I0815 17:29:28.473590   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:28.474008   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:28.474037   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:28.473971   32793 retry.go:31] will retry after 1.680079708s: waiting for machine to come up
	I0815 17:29:30.156202   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:30.156758   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:30.156790   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:30.156689   32793 retry.go:31] will retry after 1.986616449s: waiting for machine to come up
	I0815 17:29:32.145220   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:32.145625   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:32.145653   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:32.145582   32793 retry.go:31] will retry after 1.99509911s: waiting for machine to come up
	I0815 17:29:34.143673   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:34.144070   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:34.144092   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:34.144021   32793 retry.go:31] will retry after 3.609024527s: waiting for machine to come up
	I0815 17:29:37.754686   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:37.755077   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:37.755135   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:37.755055   32793 retry.go:31] will retry after 3.656239832s: waiting for machine to come up
	I0815 17:29:41.413427   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:41.413718   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find current IP address of domain ha-683878-m02 in network mk-ha-683878
	I0815 17:29:41.413737   32399 main.go:141] libmachine: (ha-683878-m02) DBG | I0815 17:29:41.413694   32793 retry.go:31] will retry after 4.461974251s: waiting for machine to come up
	I0815 17:29:45.878653   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:45.879085   32399 main.go:141] libmachine: (ha-683878-m02) Found IP for machine: 192.168.39.232
	I0815 17:29:45.879110   32399 main.go:141] libmachine: (ha-683878-m02) Reserving static IP address...
	I0815 17:29:45.879124   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has current primary IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:45.879624   32399 main.go:141] libmachine: (ha-683878-m02) DBG | unable to find host DHCP lease matching {name: "ha-683878-m02", mac: "52:54:00:85:ab:06", ip: "192.168.39.232"} in network mk-ha-683878
	I0815 17:29:45.948788   32399 main.go:141] libmachine: (ha-683878-m02) Reserved static IP address: 192.168.39.232
	I0815 17:29:45.948813   32399 main.go:141] libmachine: (ha-683878-m02) Waiting for SSH to be available...
	I0815 17:29:45.948822   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Getting to WaitForSSH function...
	I0815 17:29:45.951204   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:45.951628   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:minikube Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:45.951660   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:45.951813   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Using SSH client type: external
	I0815 17:29:45.951831   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa (-rw-------)
	I0815 17:29:45.951863   32399 main.go:141] libmachine: (ha-683878-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:29:45.951882   32399 main.go:141] libmachine: (ha-683878-m02) DBG | About to run SSH command:
	I0815 17:29:45.951896   32399 main.go:141] libmachine: (ha-683878-m02) DBG | exit 0
	I0815 17:29:46.072523   32399 main.go:141] libmachine: (ha-683878-m02) DBG | SSH cmd err, output: <nil>: 
	I0815 17:29:46.072743   32399 main.go:141] libmachine: (ha-683878-m02) KVM machine creation complete!
	I0815 17:29:46.073108   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetConfigRaw
	I0815 17:29:46.073642   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:46.073868   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:46.074054   32399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 17:29:46.074070   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:29:46.075264   32399 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 17:29:46.075280   32399 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 17:29:46.075288   32399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 17:29:46.075296   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.077684   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.078048   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.078089   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.078236   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.078425   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.078601   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.078762   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.078922   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.079097   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.079107   32399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 17:29:46.183648   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:29:46.183666   32399 main.go:141] libmachine: Detecting the provisioner...
	I0815 17:29:46.183673   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.186236   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.186540   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.186564   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.186696   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.186864   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.187033   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.187182   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.187309   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.187511   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.187522   32399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 17:29:46.289046   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 17:29:46.289126   32399 main.go:141] libmachine: found compatible host: buildroot
	I0815 17:29:46.289140   32399 main.go:141] libmachine: Provisioning with buildroot...
	I0815 17:29:46.289150   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetMachineName
	I0815 17:29:46.289395   32399 buildroot.go:166] provisioning hostname "ha-683878-m02"
	I0815 17:29:46.289419   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetMachineName
	I0815 17:29:46.289625   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.292225   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.292594   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.292619   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.292796   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.292966   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.293120   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.293247   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.293418   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.293595   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.293611   32399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683878-m02 && echo "ha-683878-m02" | sudo tee /etc/hostname
	I0815 17:29:46.410956   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878-m02
	
	I0815 17:29:46.410983   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.413462   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.413775   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.413803   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.413942   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.414120   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.414257   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.414425   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.414558   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.414727   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.414743   32399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683878-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683878-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683878-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:29:46.525032   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:29:46.525061   32399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:29:46.525080   32399 buildroot.go:174] setting up certificates
	I0815 17:29:46.525088   32399 provision.go:84] configureAuth start
	I0815 17:29:46.525097   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetMachineName
	I0815 17:29:46.525380   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:29:46.527520   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.527851   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.527872   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.528001   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.530027   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.530338   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.530362   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.530457   32399 provision.go:143] copyHostCerts
	I0815 17:29:46.530496   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:29:46.530525   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 17:29:46.530533   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:29:46.530595   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:29:46.530665   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:29:46.530682   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 17:29:46.530687   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:29:46.530709   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:29:46.530748   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:29:46.530764   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 17:29:46.530769   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:29:46.530787   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:29:46.530830   32399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.ha-683878-m02 san=[127.0.0.1 192.168.39.232 ha-683878-m02 localhost minikube]
	I0815 17:29:46.603808   32399 provision.go:177] copyRemoteCerts
	I0815 17:29:46.603862   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:29:46.603885   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.606406   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.606664   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.606690   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.606845   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.607007   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.607174   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.607311   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:29:46.686765   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:29:46.686848   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:29:46.714440   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:29:46.714513   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:29:46.740563   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:29:46.740634   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 17:29:46.766101   32399 provision.go:87] duration metric: took 240.999673ms to configureAuth
	I0815 17:29:46.766129   32399 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:29:46.766339   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:29:46.766406   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:46.769092   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.769406   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:46.769430   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:46.769535   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:46.769707   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.769874   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:46.770015   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:46.770189   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:46.770362   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:46.770377   32399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:29:47.035837   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:29:47.035866   32399 main.go:141] libmachine: Checking connection to Docker...
	I0815 17:29:47.035876   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetURL
	I0815 17:29:47.037224   32399 main.go:141] libmachine: (ha-683878-m02) DBG | Using libvirt version 6000000
	I0815 17:29:47.039511   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.039863   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.039891   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.040079   32399 main.go:141] libmachine: Docker is up and running!
	I0815 17:29:47.040093   32399 main.go:141] libmachine: Reticulating splines...
	I0815 17:29:47.040101   32399 client.go:171] duration metric: took 25.078679128s to LocalClient.Create
	I0815 17:29:47.040127   32399 start.go:167] duration metric: took 25.078737115s to libmachine.API.Create "ha-683878"
	I0815 17:29:47.040146   32399 start.go:293] postStartSetup for "ha-683878-m02" (driver="kvm2")
	I0815 17:29:47.040160   32399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:29:47.040181   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.040402   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:29:47.040422   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:47.042232   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.042511   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.042539   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.042651   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:47.042803   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.042933   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:47.043069   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:29:47.122740   32399 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:29:47.127067   32399 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:29:47.127097   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:29:47.127175   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:29:47.127259   32399 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 17:29:47.127270   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 17:29:47.127349   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:29:47.136399   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:29:47.161183   32399 start.go:296] duration metric: took 121.024015ms for postStartSetup
	I0815 17:29:47.161234   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetConfigRaw
	I0815 17:29:47.161791   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:29:47.164161   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.164539   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.164562   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.164857   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:29:47.165036   32399 start.go:128] duration metric: took 25.221244837s to createHost
	I0815 17:29:47.165059   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:47.167218   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.167508   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.167534   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.167630   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:47.167829   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.167986   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.168206   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:47.168380   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:29:47.168594   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.232 22 <nil> <nil>}
	I0815 17:29:47.168608   32399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:29:47.269418   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723742987.248295185
	
	I0815 17:29:47.269438   32399 fix.go:216] guest clock: 1723742987.248295185
	I0815 17:29:47.269448   32399 fix.go:229] Guest: 2024-08-15 17:29:47.248295185 +0000 UTC Remote: 2024-08-15 17:29:47.165046704 +0000 UTC m=+72.397941365 (delta=83.248481ms)
	I0815 17:29:47.269475   32399 fix.go:200] guest clock delta is within tolerance: 83.248481ms
	I0815 17:29:47.269482   32399 start.go:83] releasing machines lock for "ha-683878-m02", held for 25.325849025s
	I0815 17:29:47.269503   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.269773   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:29:47.272069   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.272473   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.272513   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.274690   32399 out.go:177] * Found network options:
	I0815 17:29:47.275926   32399 out.go:177]   - NO_PROXY=192.168.39.17
	W0815 17:29:47.277082   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:29:47.277107   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.277550   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.277746   32399 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:29:47.277960   32399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	W0815 17:29:47.277974   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:29:47.278006   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:47.278044   32399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:29:47.278062   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:29:47.280307   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.280618   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.280646   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.280744   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:47.280853   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.280927   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.281109   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:47.281289   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:47.281310   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:47.281310   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:29:47.281492   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:29:47.281635   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:29:47.281781   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:29:47.281957   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:29:47.515530   32399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:29:47.522744   32399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:29:47.522812   32399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:29:47.539055   32399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 17:29:47.539076   32399 start.go:495] detecting cgroup driver to use...
	I0815 17:29:47.539150   32399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:29:47.554077   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:29:47.568541   32399 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:29:47.568586   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:29:47.582023   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:29:47.596357   32399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:29:47.712007   32399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:29:47.860743   32399 docker.go:233] disabling docker service ...
	I0815 17:29:47.860809   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:29:47.875352   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:29:47.888137   32399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:29:48.018622   32399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:29:48.148043   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:29:48.161831   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:29:48.179989   32399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:29:48.180042   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.190999   32399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:29:48.191066   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.201369   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.211934   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.222160   32399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:29:48.232612   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.243359   32399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.260510   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:29:48.270772   32399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:29:48.280123   32399 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 17:29:48.280168   32399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 17:29:48.293848   32399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:29:48.302741   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:29:48.435828   32399 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:29:48.589360   32399 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:29:48.589426   32399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:29:48.594369   32399 start.go:563] Will wait 60s for crictl version
	I0815 17:29:48.594428   32399 ssh_runner.go:195] Run: which crictl
	I0815 17:29:48.598223   32399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:29:48.646876   32399 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:29:48.646947   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:29:48.681369   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:29:48.714467   32399 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:29:48.715567   32399 out.go:177]   - env NO_PROXY=192.168.39.17
	I0815 17:29:48.716731   32399 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:29:48.719344   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:48.719801   32399 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:29:36 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:29:48.719829   32399 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:29:48.720036   32399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:29:48.724723   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:29:48.737127   32399 mustload.go:65] Loading cluster: ha-683878
	I0815 17:29:48.737342   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:29:48.737704   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:48.737734   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:48.751772   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
	I0815 17:29:48.752196   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:48.752663   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:48.752686   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:48.752989   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:48.753182   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:29:48.754599   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:29:48.754985   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:29:48.755028   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:29:48.768922   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0815 17:29:48.769249   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:29:48.769642   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:29:48.769661   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:29:48.769914   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:29:48.770078   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:29:48.770229   32399 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878 for IP: 192.168.39.232
	I0815 17:29:48.770244   32399 certs.go:194] generating shared ca certs ...
	I0815 17:29:48.770260   32399 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:48.770399   32399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:29:48.770448   32399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:29:48.770464   32399 certs.go:256] generating profile certs ...
	I0815 17:29:48.770559   32399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key
	I0815 17:29:48.770590   32399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.faf4606f
	I0815 17:29:48.770608   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.faf4606f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17 192.168.39.232 192.168.39.254]
	I0815 17:29:49.003509   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.faf4606f ...
	I0815 17:29:49.003550   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.faf4606f: {Name:mk9b4d24b176a74aaa3c6d56b9fc54abe622fa6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:49.003731   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.faf4606f ...
	I0815 17:29:49.003746   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.faf4606f: {Name:mk72d614c186e223591fe67bed0c6e945b20bee6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:29:49.003821   32399 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.faf4606f -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt
	I0815 17:29:49.003952   32399 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.faf4606f -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key
	I0815 17:29:49.004079   32399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key
	I0815 17:29:49.004094   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:29:49.004107   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:29:49.004119   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:29:49.004132   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:29:49.004145   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:29:49.004157   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:29:49.004167   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:29:49.004179   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:29:49.004225   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 17:29:49.004254   32399 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 17:29:49.004263   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:29:49.004285   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:29:49.004308   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:29:49.004330   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:29:49.004366   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:29:49.004394   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:49.004408   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 17:29:49.004422   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 17:29:49.004452   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:29:49.007270   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:49.007676   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:29:49.007704   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:29:49.007892   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:29:49.008045   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:29:49.008177   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:29:49.008302   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:29:49.076853   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 17:29:49.081397   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 17:29:49.092530   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 17:29:49.096710   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 17:29:49.111800   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 17:29:49.121752   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 17:29:49.134310   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 17:29:49.138987   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 17:29:49.151077   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 17:29:49.155430   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 17:29:49.166575   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 17:29:49.171681   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 17:29:49.189127   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:29:49.217832   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:29:49.243283   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:29:49.268540   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:29:49.291304   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 17:29:49.315317   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 17:29:49.339192   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:29:49.363621   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:29:49.387021   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:29:49.413451   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 17:29:49.436995   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 17:29:49.464385   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 17:29:49.482364   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 17:29:49.499948   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 17:29:49.517811   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 17:29:49.535604   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 17:29:49.553537   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 17:29:49.572141   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 17:29:49.590686   32399 ssh_runner.go:195] Run: openssl version
	I0815 17:29:49.596675   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 17:29:49.607790   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 17:29:49.612457   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 17:29:49.612512   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 17:29:49.618479   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 17:29:49.629534   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 17:29:49.640409   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 17:29:49.644843   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 17:29:49.644886   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 17:29:49.650947   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:29:49.661767   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:29:49.672322   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:49.677324   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:49.677425   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:29:49.683052   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:29:49.693489   32399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:29:49.697544   32399 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:29:49.697590   32399 kubeadm.go:934] updating node {m02 192.168.39.232 8443 v1.31.0 crio true true} ...
	I0815 17:29:49.697676   32399 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683878-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:29:49.697706   32399 kube-vip.go:115] generating kube-vip config ...
	I0815 17:29:49.697739   32399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 17:29:49.713566   32399 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:29:49.713656   32399 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 17:29:49.713717   32399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:29:49.724044   32399 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 17:29:49.724103   32399 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 17:29:49.735786   32399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 17:29:49.735817   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 17:29:49.735818   32399 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet
	I0815 17:29:49.735828   32399 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm
	I0815 17:29:49.735893   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 17:29:49.740251   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 17:29:49.740277   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 17:30:32.983649   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 17:30:32.983736   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 17:30:32.991064   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 17:30:32.991097   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 17:30:44.468061   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:30:44.483663   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 17:30:44.483769   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 17:30:44.488170   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 17:30:44.488205   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
	I0815 17:30:44.807916   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 17:30:44.818008   32399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 17:30:44.834894   32399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:30:44.852162   32399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 17:30:44.868384   32399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:30:44.872949   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:30:44.885070   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:30:45.018161   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:30:45.035336   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:30:45.035674   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:30:45.035708   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:30:45.050682   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39229
	I0815 17:30:45.051061   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:30:45.051458   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:30:45.051477   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:30:45.051763   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:30:45.051952   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:30:45.052130   32399 start.go:317] joinCluster: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:30:45.052260   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 17:30:45.052282   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:30:45.055414   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:30:45.055809   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:30:45.055841   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:30:45.056090   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:30:45.056283   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:30:45.056449   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:30:45.056605   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:30:45.218795   32399 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:30:45.218836   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv6pe0.d3ubsmvhon2dbywh --discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683878-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443"
	I0815 17:31:04.724973   32399 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vv6pe0.d3ubsmvhon2dbywh --discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683878-m02 --control-plane --apiserver-advertise-address=192.168.39.232 --apiserver-bind-port=8443": (19.506108229s)
	I0815 17:31:04.725004   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 17:31:05.278404   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683878-m02 minikube.k8s.io/updated_at=2024_08_15T17_31_05_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=ha-683878 minikube.k8s.io/primary=false
	I0815 17:31:05.420275   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-683878-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 17:31:05.549212   32399 start.go:319] duration metric: took 20.497080312s to joinCluster
	I0815 17:31:05.549300   32399 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:31:05.549584   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:31:05.550772   32399 out.go:177] * Verifying Kubernetes components...
	I0815 17:31:05.551934   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:31:05.807088   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:31:05.877001   32399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:31:05.877276   32399 kapi.go:59] client config for ha-683878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 17:31:05.877345   32399 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.17:8443
	I0815 17:31:05.877595   32399 node_ready.go:35] waiting up to 6m0s for node "ha-683878-m02" to be "Ready" ...
	I0815 17:31:05.877697   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:05.877708   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:05.877716   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:05.877721   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:05.909815   32399 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0815 17:31:06.377785   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:06.377807   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:06.377819   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:06.377824   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:06.384312   32399 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 17:31:06.878784   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:06.878808   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:06.878816   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:06.878822   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:06.884587   32399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 17:31:07.378460   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:07.378483   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:07.378491   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:07.378496   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:07.382403   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:07.878654   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:07.878676   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:07.878685   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:07.878693   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:07.941504   32399 round_trippers.go:574] Response Status: 200 OK in 62 milliseconds
	I0815 17:31:07.943250   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:08.378618   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:08.378638   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:08.378647   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:08.378650   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:08.382939   32399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:31:08.877735   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:08.877756   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:08.877764   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:08.877769   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:08.881592   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:09.378262   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:09.378282   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:09.378293   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:09.378298   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:09.381556   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:09.877768   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:09.877791   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:09.877799   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:09.877802   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:09.881146   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:10.378773   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:10.378795   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:10.378806   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:10.378810   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:10.381868   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:10.382620   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:10.877936   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:10.877965   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:10.877978   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:10.877986   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:10.881408   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:11.377977   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:11.378001   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:11.378013   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:11.378017   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:11.381359   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:11.878825   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:11.878852   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:11.878864   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:11.878874   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:11.882329   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:12.377799   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:12.377892   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:12.377913   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:12.377925   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:12.392435   32399 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0815 17:31:12.393503   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:12.877747   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:12.877766   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:12.877774   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:12.877778   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:12.880867   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:13.377922   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:13.377946   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:13.377955   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:13.377959   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:13.381623   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:13.878175   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:13.878197   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:13.878205   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:13.878209   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:13.881562   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:14.378606   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:14.378632   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:14.378644   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:14.378652   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:14.382072   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:14.878502   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:14.878527   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:14.878534   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:14.878539   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:14.881964   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:14.882629   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:15.377784   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:15.377805   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:15.377814   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:15.377818   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:15.381157   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:15.878238   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:15.878262   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:15.878270   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:15.878273   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:15.882003   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:16.377958   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:16.377986   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:16.377998   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:16.378003   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:16.381608   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:16.878275   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:16.878301   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:16.878312   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:16.878318   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:16.881211   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:17.378777   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:17.378800   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:17.378810   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:17.378814   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:17.382275   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:17.382984   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:17.878364   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:17.878385   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:17.878392   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:17.878400   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:17.881699   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:18.378569   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:18.378590   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:18.378597   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:18.378601   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:18.381821   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:18.878793   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:18.878818   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:18.878826   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:18.878831   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:18.882150   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:19.378233   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:19.378257   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:19.378267   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:19.378274   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:19.381782   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:19.877813   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:19.877835   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:19.877845   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:19.877852   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:19.881346   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:19.881959   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:20.378057   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:20.378089   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:20.378097   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:20.378101   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:20.381238   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:20.878689   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:20.878712   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:20.878720   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:20.878725   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:20.882140   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:21.378158   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:21.378186   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:21.378197   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:21.378200   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:21.381672   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:21.878435   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:21.878462   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:21.878473   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:21.878480   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:21.881543   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:21.882310   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:22.378428   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:22.378452   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:22.378463   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:22.378469   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:22.381974   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:22.877776   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:22.877797   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:22.877805   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:22.877810   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:22.881437   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:23.378541   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:23.378563   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:23.378571   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:23.378576   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:23.381756   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:23.878715   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:23.878737   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:23.878744   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:23.878748   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:23.882112   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:23.882629   32399 node_ready.go:53] node "ha-683878-m02" has status "Ready":"False"
	I0815 17:31:24.377972   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:24.378000   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.378022   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.378031   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.380977   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.381546   32399 node_ready.go:49] node "ha-683878-m02" has status "Ready":"True"
	I0815 17:31:24.381563   32399 node_ready.go:38] duration metric: took 18.503951636s for node "ha-683878-m02" to be "Ready" ...
	I0815 17:31:24.381571   32399 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:31:24.381635   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:24.381643   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.381650   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.381655   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.385491   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:24.393320   32399 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.393407   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-c5mlj
	I0815 17:31:24.393419   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.393428   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.393433   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.396623   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:24.397383   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.397396   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.397403   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.397406   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.399814   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.400377   32399 pod_ready.go:93] pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.400401   32399 pod_ready.go:82] duration metric: took 7.055742ms for pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.400413   32399 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.400472   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-kfczp
	I0815 17:31:24.400482   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.400507   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.400519   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.402522   32399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:31:24.403273   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.403288   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.403294   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.403300   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.405426   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.406015   32399 pod_ready.go:93] pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.406034   32399 pod_ready.go:82] duration metric: took 5.613674ms for pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.406047   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.406103   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878
	I0815 17:31:24.406113   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.406123   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.406129   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.408178   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.408621   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.408633   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.408639   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.408645   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.411050   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.411451   32399 pod_ready.go:93] pod "etcd-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.411463   32399 pod_ready.go:82] duration metric: took 5.409665ms for pod "etcd-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.411470   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.411506   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878-m02
	I0815 17:31:24.411513   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.411519   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.411525   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.414256   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.415219   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:24.415231   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.415237   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.415242   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.417101   32399 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0815 17:31:24.417673   32399 pod_ready.go:93] pod "etcd-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.417691   32399 pod_ready.go:82] duration metric: took 6.215712ms for pod "etcd-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.417703   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.578027   32399 request.go:632] Waited for 160.263351ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878
	I0815 17:31:24.578084   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878
	I0815 17:31:24.578090   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.578100   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.578109   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.581871   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:24.778898   32399 request.go:632] Waited for 196.360876ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.778945   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:24.778949   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.778975   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.778981   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.781919   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:24.782450   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:24.782468   32399 pod_ready.go:82] duration metric: took 364.758957ms for pod "kube-apiserver-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.782478   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:24.978201   32399 request.go:632] Waited for 195.643943ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m02
	I0815 17:31:24.978257   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m02
	I0815 17:31:24.978262   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:24.978271   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:24.978274   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:24.981594   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.178827   32399 request.go:632] Waited for 196.398405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:25.178907   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:25.178916   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.178924   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.178931   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.181476   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:25.182346   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:25.182365   32399 pod_ready.go:82] duration metric: took 399.878796ms for pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.182375   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.378488   32399 request.go:632] Waited for 196.025457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878
	I0815 17:31:25.378611   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878
	I0815 17:31:25.378624   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.378637   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.378644   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.382024   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.578988   32399 request.go:632] Waited for 196.379866ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:25.579052   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:25.579060   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.579071   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.579077   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.582263   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.582801   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:25.582817   32399 pod_ready.go:82] duration metric: took 400.436209ms for pod "kube-controller-manager-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.582826   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.778943   32399 request.go:632] Waited for 196.055441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m02
	I0815 17:31:25.779009   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m02
	I0815 17:31:25.779014   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.779022   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.779028   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.782312   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.978321   32399 request.go:632] Waited for 195.368316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:25.978371   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:25.978376   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:25.978384   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:25.978392   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:25.981546   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:25.982137   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:25.982154   32399 pod_ready.go:82] duration metric: took 399.321147ms for pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:25.982168   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-89p4v" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.178409   32399 request.go:632] Waited for 196.141898ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89p4v
	I0815 17:31:26.178472   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89p4v
	I0815 17:31:26.178480   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.178491   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.178504   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.181996   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:26.379038   32399 request.go:632] Waited for 196.398272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:26.379118   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:26.379124   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.379134   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.379150   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.382230   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:26.382716   32399 pod_ready.go:93] pod "kube-proxy-89p4v" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:26.382733   32399 pod_ready.go:82] duration metric: took 400.551386ms for pod "kube-proxy-89p4v" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.382743   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s9hw4" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.578961   32399 request.go:632] Waited for 196.131977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9hw4
	I0815 17:31:26.579028   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9hw4
	I0815 17:31:26.579036   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.579046   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.579056   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.581979   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:31:26.779018   32399 request.go:632] Waited for 196.364938ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:26.779076   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:26.779083   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.779092   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.779100   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.782152   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:26.782737   32399 pod_ready.go:93] pod "kube-proxy-s9hw4" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:26.782752   32399 pod_ready.go:82] duration metric: took 400.003294ms for pod "kube-proxy-s9hw4" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.782762   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:26.978870   32399 request.go:632] Waited for 196.03424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878
	I0815 17:31:26.978922   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878
	I0815 17:31:26.978927   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:26.978934   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:26.978938   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:26.982257   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:27.178070   32399 request.go:632] Waited for 195.308344ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:27.178126   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:31:27.178131   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.178146   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.178165   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.182717   32399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:31:27.183320   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:27.183339   32399 pod_ready.go:82] duration metric: took 400.572354ms for pod "kube-scheduler-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:27.183349   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:27.378379   32399 request.go:632] Waited for 194.971084ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m02
	I0815 17:31:27.378465   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m02
	I0815 17:31:27.378474   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.378490   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.378499   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.382012   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:27.579011   32399 request.go:632] Waited for 196.360788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:27.579097   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:31:27.579103   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.579111   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.579119   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.582296   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:27.583177   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:31:27.583203   32399 pod_ready.go:82] duration metric: took 399.846324ms for pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:31:27.583218   32399 pod_ready.go:39] duration metric: took 3.201632019s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
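	The readiness loop above polls each control-plane pod until its PodReady condition reports True, throttling itself between requests. A minimal client-go sketch of that polling pattern (an illustrative helper, not minikube's pod_ready.go):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod until its Ready condition is True or the timeout
    // expires. Transient GET errors are treated as "not ready yet".
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling on transient errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }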
	I0815 17:31:27.583247   32399 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:31:27.583302   32399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:31:27.599424   32399 api_server.go:72] duration metric: took 22.050081502s to wait for apiserver process to appear ...
	I0815 17:31:27.599446   32399 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:31:27.599473   32399 api_server.go:253] Checking apiserver healthz at https://192.168.39.17:8443/healthz ...
	I0815 17:31:27.603735   32399 api_server.go:279] https://192.168.39.17:8443/healthz returned 200:
	ok
	I0815 17:31:27.603811   32399 round_trippers.go:463] GET https://192.168.39.17:8443/version
	I0815 17:31:27.603822   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.603832   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.603840   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.604623   32399 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 17:31:27.604741   32399 api_server.go:141] control plane version: v1.31.0
	I0815 17:31:27.604759   32399 api_server.go:131] duration metric: took 5.305274ms to wait for apiserver health ...
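	Before moving on, the start-up code gates on GET /healthz answering 200 with the body "ok" and only then reads /version. A hedged sketch of such a probe with net/http (illustrative only; minikube authenticates with the cluster's client certificates rather than skipping TLS verification):

    import (
        "crypto/tls"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // probeHealthz issues GET <base>/healthz and reports whether the apiserver
    // answered 200 OK with "ok". TLS verification is skipped only because this
    // is an illustrative sketch.
    func probeHealthz(base string) (bool, error) {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
    }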
	I0815 17:31:27.604768   32399 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:31:27.778083   32399 request.go:632] Waited for 173.246664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:27.778137   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:27.778142   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.778150   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.778152   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.782656   32399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:31:27.787187   32399 system_pods.go:59] 17 kube-system pods found
	I0815 17:31:27.787235   32399 system_pods.go:61] "coredns-6f6b679f8f-c5mlj" [24146559-ea1d-42db-9f61-730ed436dea8] Running
	I0815 17:31:27.787245   32399 system_pods.go:61] "coredns-6f6b679f8f-kfczp" [5d18cfeb-ccfe-4432-b999-510d84438c7a] Running
	I0815 17:31:27.787251   32399 system_pods.go:61] "etcd-ha-683878" [89164a36-1867-4d3e-8b16-4b6e3f5735d9] Running
	I0815 17:31:27.787257   32399 system_pods.go:61] "etcd-ha-683878-m02" [ffd47718-50f2-42b0-8759-390d981a69b8] Running
	I0815 17:31:27.787262   32399 system_pods.go:61] "kindnet-g8lqf" [bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e] Running
	I0815 17:31:27.787268   32399 system_pods.go:61] "kindnet-z5z9h" [525522f9-4aef-49ae-9f3f-02960fe82bff] Running
	I0815 17:31:27.787275   32399 system_pods.go:61] "kube-apiserver-ha-683878" [265e1832-cd30-4ba1-9aa5-5e18cd71e8f0] Running
	I0815 17:31:27.787279   32399 system_pods.go:61] "kube-apiserver-ha-683878-m02" [bff6c9d5-5c64-4220-9a17-f3f08b8e5dab] Running
	I0815 17:31:27.787287   32399 system_pods.go:61] "kube-controller-manager-ha-683878" [e958c9a5-cf23-4d1a-bf25-ab03393607cb] Running
	I0815 17:31:27.787290   32399 system_pods.go:61] "kube-controller-manager-ha-683878-m02" [fa5ae940-8a2a-4a4c-950c-5fe267cddc2d] Running
	I0815 17:31:27.787293   32399 system_pods.go:61] "kube-proxy-89p4v" [58c774bf-7b9a-46ad-8d85-81df9b68415a] Running
	I0815 17:31:27.787296   32399 system_pods.go:61] "kube-proxy-s9hw4" [f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1] Running
	I0815 17:31:27.787299   32399 system_pods.go:61] "kube-scheduler-ha-683878" [fe51d20e-6174-48c9-b170-2eff952a4975] Running
	I0815 17:31:27.787303   32399 system_pods.go:61] "kube-scheduler-ha-683878-m02" [bb94ccf5-231f-4bb5-903d-8664be14bc58] Running
	I0815 17:31:27.787306   32399 system_pods.go:61] "kube-vip-ha-683878" [9c4a5acc-022d-4756-a0c4-6a867b22f0bb] Running
	I0815 17:31:27.787309   32399 system_pods.go:61] "kube-vip-ha-683878-m02" [041e7349-ab7d-4b80-9f0d-ea92f61d637b] Running
	I0815 17:31:27.787312   32399 system_pods.go:61] "storage-provisioner" [78d884cc-a5c3-4f94-b643-b6593cb3f622] Running
	I0815 17:31:27.787318   32399 system_pods.go:74] duration metric: took 182.543913ms to wait for pod list to return data ...
	I0815 17:31:27.787325   32399 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:31:27.978749   32399 request.go:632] Waited for 191.333158ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:31:27.978827   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:31:27.978833   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:27.978840   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:27.978844   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:27.982849   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:27.983158   32399 default_sa.go:45] found service account: "default"
	I0815 17:31:27.983178   32399 default_sa.go:55] duration metric: took 195.845847ms for default service account to be created ...
	I0815 17:31:27.983186   32399 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:31:28.178628   32399 request.go:632] Waited for 195.36887ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:28.178691   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:31:28.178698   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:28.178710   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:28.178715   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:28.184296   32399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 17:31:28.188703   32399 system_pods.go:86] 17 kube-system pods found
	I0815 17:31:28.188726   32399 system_pods.go:89] "coredns-6f6b679f8f-c5mlj" [24146559-ea1d-42db-9f61-730ed436dea8] Running
	I0815 17:31:28.188733   32399 system_pods.go:89] "coredns-6f6b679f8f-kfczp" [5d18cfeb-ccfe-4432-b999-510d84438c7a] Running
	I0815 17:31:28.188737   32399 system_pods.go:89] "etcd-ha-683878" [89164a36-1867-4d3e-8b16-4b6e3f5735d9] Running
	I0815 17:31:28.188741   32399 system_pods.go:89] "etcd-ha-683878-m02" [ffd47718-50f2-42b0-8759-390d981a69b8] Running
	I0815 17:31:28.188745   32399 system_pods.go:89] "kindnet-g8lqf" [bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e] Running
	I0815 17:31:28.188748   32399 system_pods.go:89] "kindnet-z5z9h" [525522f9-4aef-49ae-9f3f-02960fe82bff] Running
	I0815 17:31:28.188751   32399 system_pods.go:89] "kube-apiserver-ha-683878" [265e1832-cd30-4ba1-9aa5-5e18cd71e8f0] Running
	I0815 17:31:28.188755   32399 system_pods.go:89] "kube-apiserver-ha-683878-m02" [bff6c9d5-5c64-4220-9a17-f3f08b8e5dab] Running
	I0815 17:31:28.188759   32399 system_pods.go:89] "kube-controller-manager-ha-683878" [e958c9a5-cf23-4d1a-bf25-ab03393607cb] Running
	I0815 17:31:28.188762   32399 system_pods.go:89] "kube-controller-manager-ha-683878-m02" [fa5ae940-8a2a-4a4c-950c-5fe267cddc2d] Running
	I0815 17:31:28.188765   32399 system_pods.go:89] "kube-proxy-89p4v" [58c774bf-7b9a-46ad-8d85-81df9b68415a] Running
	I0815 17:31:28.188769   32399 system_pods.go:89] "kube-proxy-s9hw4" [f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1] Running
	I0815 17:31:28.188773   32399 system_pods.go:89] "kube-scheduler-ha-683878" [fe51d20e-6174-48c9-b170-2eff952a4975] Running
	I0815 17:31:28.188777   32399 system_pods.go:89] "kube-scheduler-ha-683878-m02" [bb94ccf5-231f-4bb5-903d-8664be14bc58] Running
	I0815 17:31:28.188781   32399 system_pods.go:89] "kube-vip-ha-683878" [9c4a5acc-022d-4756-a0c4-6a867b22f0bb] Running
	I0815 17:31:28.188783   32399 system_pods.go:89] "kube-vip-ha-683878-m02" [041e7349-ab7d-4b80-9f0d-ea92f61d637b] Running
	I0815 17:31:28.188786   32399 system_pods.go:89] "storage-provisioner" [78d884cc-a5c3-4f94-b643-b6593cb3f622] Running
	I0815 17:31:28.188792   32399 system_pods.go:126] duration metric: took 205.601444ms to wait for k8s-apps to be running ...
	I0815 17:31:28.188807   32399 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:31:28.188848   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:31:28.203886   32399 system_svc.go:56] duration metric: took 15.072972ms WaitForService to wait for kubelet
	I0815 17:31:28.203906   32399 kubeadm.go:582] duration metric: took 22.654565633s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:31:28.203923   32399 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:31:28.378303   32399 request.go:632] Waited for 174.316248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes
	I0815 17:31:28.378368   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes
	I0815 17:31:28.378373   32399 round_trippers.go:469] Request Headers:
	I0815 17:31:28.378381   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:31:28.378390   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:31:28.382309   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:31:28.383084   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:31:28.383108   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:31:28.383120   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:31:28.383125   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:31:28.383129   32399 node_conditions.go:105] duration metric: took 179.202113ms to run NodePressure ...
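	The NodePressure step reads each node's reported capacity from the API, which is where the ephemeral-storage and cpu figures above come from. A short client-go sketch of the same read (illustrative helper name):

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists cluster nodes and prints the two capacity fields
    // the log above reports: ephemeral storage and CPU.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }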
	I0815 17:31:28.383140   32399 start.go:241] waiting for startup goroutines ...
	I0815 17:31:28.383161   32399 start.go:255] writing updated cluster config ...
	I0815 17:31:28.385481   32399 out.go:201] 
	I0815 17:31:28.386981   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:31:28.387062   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:31:28.388679   32399 out.go:177] * Starting "ha-683878-m03" control-plane node in "ha-683878" cluster
	I0815 17:31:28.389829   32399 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:31:28.389850   32399 cache.go:56] Caching tarball of preloaded images
	I0815 17:31:28.389955   32399 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:31:28.389968   32399 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:31:28.390045   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:31:28.390206   32399 start.go:360] acquireMachinesLock for ha-683878-m03: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:31:28.390248   32399 start.go:364] duration metric: took 23.302µs to acquireMachinesLock for "ha-683878-m03"
	I0815 17:31:28.390270   32399 start.go:93] Provisioning new machine with config: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:31:28.390353   32399 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0815 17:31:28.391973   32399 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 17:31:28.392052   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:31:28.392085   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:31:28.407053   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
	I0815 17:31:28.407503   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:31:28.407917   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:31:28.407934   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:31:28.408205   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:31:28.408366   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetMachineName
	I0815 17:31:28.408515   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:28.408642   32399 start.go:159] libmachine.API.Create for "ha-683878" (driver="kvm2")
	I0815 17:31:28.408671   32399 client.go:168] LocalClient.Create starting
	I0815 17:31:28.408703   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 17:31:28.408740   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:31:28.408763   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:31:28.408826   32399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 17:31:28.408852   32399 main.go:141] libmachine: Decoding PEM data...
	I0815 17:31:28.408869   32399 main.go:141] libmachine: Parsing certificate...
	I0815 17:31:28.408896   32399 main.go:141] libmachine: Running pre-create checks...
	I0815 17:31:28.408909   32399 main.go:141] libmachine: (ha-683878-m03) Calling .PreCreateCheck
	I0815 17:31:28.409034   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetConfigRaw
	I0815 17:31:28.409344   32399 main.go:141] libmachine: Creating machine...
	I0815 17:31:28.409358   32399 main.go:141] libmachine: (ha-683878-m03) Calling .Create
	I0815 17:31:28.409457   32399 main.go:141] libmachine: (ha-683878-m03) Creating KVM machine...
	I0815 17:31:28.410578   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found existing default KVM network
	I0815 17:31:28.410708   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found existing private KVM network mk-ha-683878
	I0815 17:31:28.410885   32399 main.go:141] libmachine: (ha-683878-m03) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03 ...
	I0815 17:31:28.410909   32399 main.go:141] libmachine: (ha-683878-m03) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:31:28.410966   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:28.410873   33363 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:31:28.411042   32399 main.go:141] libmachine: (ha-683878-m03) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 17:31:28.631760   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:28.631601   33363 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa...
	I0815 17:31:28.717652   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:28.717528   33363 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/ha-683878-m03.rawdisk...
	I0815 17:31:28.717687   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Writing magic tar header
	I0815 17:31:28.717701   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Writing SSH key tar header
	I0815 17:31:28.717713   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:28.717641   33363 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03 ...
	I0815 17:31:28.717732   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03
	I0815 17:31:28.717808   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 17:31:28.717828   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:31:28.717837   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03 (perms=drwx------)
	I0815 17:31:28.717844   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 17:31:28.717853   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 17:31:28.717860   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home/jenkins
	I0815 17:31:28.717866   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Checking permissions on dir: /home
	I0815 17:31:28.717873   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Skipping /home - not owner
	I0815 17:31:28.717884   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 17:31:28.717893   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 17:31:28.717912   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 17:31:28.717937   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 17:31:28.717951   32399 main.go:141] libmachine: (ha-683878-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 17:31:28.717959   32399 main.go:141] libmachine: (ha-683878-m03) Creating domain...
	I0815 17:31:28.718766   32399 main.go:141] libmachine: (ha-683878-m03) define libvirt domain using xml: 
	I0815 17:31:28.718785   32399 main.go:141] libmachine: (ha-683878-m03) <domain type='kvm'>
	I0815 17:31:28.718795   32399 main.go:141] libmachine: (ha-683878-m03)   <name>ha-683878-m03</name>
	I0815 17:31:28.718803   32399 main.go:141] libmachine: (ha-683878-m03)   <memory unit='MiB'>2200</memory>
	I0815 17:31:28.718813   32399 main.go:141] libmachine: (ha-683878-m03)   <vcpu>2</vcpu>
	I0815 17:31:28.718825   32399 main.go:141] libmachine: (ha-683878-m03)   <features>
	I0815 17:31:28.718832   32399 main.go:141] libmachine: (ha-683878-m03)     <acpi/>
	I0815 17:31:28.718841   32399 main.go:141] libmachine: (ha-683878-m03)     <apic/>
	I0815 17:31:28.718849   32399 main.go:141] libmachine: (ha-683878-m03)     <pae/>
	I0815 17:31:28.718860   32399 main.go:141] libmachine: (ha-683878-m03)     
	I0815 17:31:28.718875   32399 main.go:141] libmachine: (ha-683878-m03)   </features>
	I0815 17:31:28.718885   32399 main.go:141] libmachine: (ha-683878-m03)   <cpu mode='host-passthrough'>
	I0815 17:31:28.718897   32399 main.go:141] libmachine: (ha-683878-m03)   
	I0815 17:31:28.718907   32399 main.go:141] libmachine: (ha-683878-m03)   </cpu>
	I0815 17:31:28.718919   32399 main.go:141] libmachine: (ha-683878-m03)   <os>
	I0815 17:31:28.718933   32399 main.go:141] libmachine: (ha-683878-m03)     <type>hvm</type>
	I0815 17:31:28.718945   32399 main.go:141] libmachine: (ha-683878-m03)     <boot dev='cdrom'/>
	I0815 17:31:28.718955   32399 main.go:141] libmachine: (ha-683878-m03)     <boot dev='hd'/>
	I0815 17:31:28.718963   32399 main.go:141] libmachine: (ha-683878-m03)     <bootmenu enable='no'/>
	I0815 17:31:28.718972   32399 main.go:141] libmachine: (ha-683878-m03)   </os>
	I0815 17:31:28.718981   32399 main.go:141] libmachine: (ha-683878-m03)   <devices>
	I0815 17:31:28.718991   32399 main.go:141] libmachine: (ha-683878-m03)     <disk type='file' device='cdrom'>
	I0815 17:31:28.719007   32399 main.go:141] libmachine: (ha-683878-m03)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/boot2docker.iso'/>
	I0815 17:31:28.719018   32399 main.go:141] libmachine: (ha-683878-m03)       <target dev='hdc' bus='scsi'/>
	I0815 17:31:28.719024   32399 main.go:141] libmachine: (ha-683878-m03)       <readonly/>
	I0815 17:31:28.719029   32399 main.go:141] libmachine: (ha-683878-m03)     </disk>
	I0815 17:31:28.719035   32399 main.go:141] libmachine: (ha-683878-m03)     <disk type='file' device='disk'>
	I0815 17:31:28.719043   32399 main.go:141] libmachine: (ha-683878-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 17:31:28.719051   32399 main.go:141] libmachine: (ha-683878-m03)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/ha-683878-m03.rawdisk'/>
	I0815 17:31:28.719058   32399 main.go:141] libmachine: (ha-683878-m03)       <target dev='hda' bus='virtio'/>
	I0815 17:31:28.719063   32399 main.go:141] libmachine: (ha-683878-m03)     </disk>
	I0815 17:31:28.719069   32399 main.go:141] libmachine: (ha-683878-m03)     <interface type='network'>
	I0815 17:31:28.719079   32399 main.go:141] libmachine: (ha-683878-m03)       <source network='mk-ha-683878'/>
	I0815 17:31:28.719086   32399 main.go:141] libmachine: (ha-683878-m03)       <model type='virtio'/>
	I0815 17:31:28.719112   32399 main.go:141] libmachine: (ha-683878-m03)     </interface>
	I0815 17:31:28.719135   32399 main.go:141] libmachine: (ha-683878-m03)     <interface type='network'>
	I0815 17:31:28.719151   32399 main.go:141] libmachine: (ha-683878-m03)       <source network='default'/>
	I0815 17:31:28.719162   32399 main.go:141] libmachine: (ha-683878-m03)       <model type='virtio'/>
	I0815 17:31:28.719172   32399 main.go:141] libmachine: (ha-683878-m03)     </interface>
	I0815 17:31:28.719179   32399 main.go:141] libmachine: (ha-683878-m03)     <serial type='pty'>
	I0815 17:31:28.719184   32399 main.go:141] libmachine: (ha-683878-m03)       <target port='0'/>
	I0815 17:31:28.719193   32399 main.go:141] libmachine: (ha-683878-m03)     </serial>
	I0815 17:31:28.719203   32399 main.go:141] libmachine: (ha-683878-m03)     <console type='pty'>
	I0815 17:31:28.719215   32399 main.go:141] libmachine: (ha-683878-m03)       <target type='serial' port='0'/>
	I0815 17:31:28.719224   32399 main.go:141] libmachine: (ha-683878-m03)     </console>
	I0815 17:31:28.719235   32399 main.go:141] libmachine: (ha-683878-m03)     <rng model='virtio'>
	I0815 17:31:28.719249   32399 main.go:141] libmachine: (ha-683878-m03)       <backend model='random'>/dev/random</backend>
	I0815 17:31:28.719270   32399 main.go:141] libmachine: (ha-683878-m03)     </rng>
	I0815 17:31:28.719280   32399 main.go:141] libmachine: (ha-683878-m03)     
	I0815 17:31:28.719288   32399 main.go:141] libmachine: (ha-683878-m03)     
	I0815 17:31:28.719297   32399 main.go:141] libmachine: (ha-683878-m03)   </devices>
	I0815 17:31:28.719304   32399 main.go:141] libmachine: (ha-683878-m03) </domain>
	I0815 17:31:28.719318   32399 main.go:141] libmachine: (ha-683878-m03) 
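	The kvm2 driver renders the domain XML shown above and then hands it to libvirt to define and boot the VM. A minimal sketch of that hand-off with the libvirt Go bindings (libvirt.org/go/libvirt), assuming the XML string has already been built:

    import "libvirt.org/go/libvirt"

    // defineAndStart defines a persistent domain from the XML the driver
    // rendered and boots it. Minimal sketch; the real driver also tags the
    // domain, manages networks, and retries on transient errors.
    func defineAndStart(domainXML string) error {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            return err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML) // persist the definition
        if err != nil {
            return err
        }
        defer dom.Free()

        return dom.Create() // equivalent to `virsh start`
    }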
	I0815 17:31:28.725935   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3e:a2:c1 in network default
	I0815 17:31:28.726409   32399 main.go:141] libmachine: (ha-683878-m03) Ensuring networks are active...
	I0815 17:31:28.726427   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:28.727058   32399 main.go:141] libmachine: (ha-683878-m03) Ensuring network default is active
	I0815 17:31:28.727407   32399 main.go:141] libmachine: (ha-683878-m03) Ensuring network mk-ha-683878 is active
	I0815 17:31:28.727832   32399 main.go:141] libmachine: (ha-683878-m03) Getting domain xml...
	I0815 17:31:28.728606   32399 main.go:141] libmachine: (ha-683878-m03) Creating domain...
	I0815 17:31:29.950847   32399 main.go:141] libmachine: (ha-683878-m03) Waiting to get IP...
	I0815 17:31:29.951571   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:29.951964   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:29.951991   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:29.951939   33363 retry.go:31] will retry after 304.500308ms: waiting for machine to come up
	I0815 17:31:30.258371   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:30.258898   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:30.258927   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:30.258847   33363 retry.go:31] will retry after 370.386312ms: waiting for machine to come up
	I0815 17:31:30.630265   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:30.630695   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:30.630717   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:30.630659   33363 retry.go:31] will retry after 429.569597ms: waiting for machine to come up
	I0815 17:31:31.062207   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:31.062738   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:31.062761   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:31.062687   33363 retry.go:31] will retry after 501.692964ms: waiting for machine to come up
	I0815 17:31:31.566268   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:31.566720   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:31.566748   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:31.566659   33363 retry.go:31] will retry after 670.660701ms: waiting for machine to come up
	I0815 17:31:32.238594   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:32.239092   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:32.239118   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:32.239050   33363 retry.go:31] will retry after 896.312096ms: waiting for machine to come up
	I0815 17:31:33.136545   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:33.136915   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:33.136938   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:33.136887   33363 retry.go:31] will retry after 856.407541ms: waiting for machine to come up
	I0815 17:31:33.995449   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:33.995955   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:33.995983   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:33.995903   33363 retry.go:31] will retry after 1.414598205s: waiting for machine to come up
	I0815 17:31:35.412357   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:35.412827   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:35.412859   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:35.412773   33363 retry.go:31] will retry after 1.397444789s: waiting for machine to come up
	I0815 17:31:36.812422   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:36.812840   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:36.812861   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:36.812799   33363 retry.go:31] will retry after 1.619436816s: waiting for machine to come up
	I0815 17:31:38.434084   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:38.434588   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:38.434619   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:38.434529   33363 retry.go:31] will retry after 2.585895781s: waiting for machine to come up
	I0815 17:31:41.021583   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:41.021956   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:41.021986   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:41.021926   33363 retry.go:31] will retry after 3.434031626s: waiting for machine to come up
	I0815 17:31:44.457457   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:44.457897   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:44.457918   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:44.457864   33363 retry.go:31] will retry after 3.461619879s: waiting for machine to come up
	I0815 17:31:47.921569   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:47.922102   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find current IP address of domain ha-683878-m03 in network mk-ha-683878
	I0815 17:31:47.922136   32399 main.go:141] libmachine: (ha-683878-m03) DBG | I0815 17:31:47.921900   33363 retry.go:31] will retry after 5.053292471s: waiting for machine to come up
	I0815 17:31:52.978473   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:52.979031   32399 main.go:141] libmachine: (ha-683878-m03) Found IP for machine: 192.168.39.102
	I0815 17:31:52.979052   32399 main.go:141] libmachine: (ha-683878-m03) Reserving static IP address...
	I0815 17:31:52.979066   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has current primary IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:52.979552   32399 main.go:141] libmachine: (ha-683878-m03) DBG | unable to find host DHCP lease matching {name: "ha-683878-m03", mac: "52:54:00:3c:07:a9", ip: "192.168.39.102"} in network mk-ha-683878
	I0815 17:31:53.052883   32399 main.go:141] libmachine: (ha-683878-m03) Reserved static IP address: 192.168.39.102
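	The IP wait above retries with a growing, jittered delay until the guest's DHCP lease shows up. A sketch of that retry pattern (names are illustrative; minikube's retry.go differs in detail):

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup() with a jittered, roughly doubling delay, mirroring
    // the "will retry after ..." lines above. lookup is any func that returns the
    // machine's IP, or an error while the lease is still missing.
    func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
        deadline := time.Now().Add(maxWait)
        delay := 300 * time.Millisecond
        for {
            ip, err := lookup()
            if err == nil && ip != "" {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out waiting for IP: %v", err)
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
    }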
	I0815 17:31:53.052915   32399 main.go:141] libmachine: (ha-683878-m03) Waiting for SSH to be available...
	I0815 17:31:53.052925   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Getting to WaitForSSH function...
	I0815 17:31:53.055559   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.055954   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.055985   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.056131   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Using SSH client type: external
	I0815 17:31:53.056160   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa (-rw-------)
	I0815 17:31:53.056881   32399 main.go:141] libmachine: (ha-683878-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.102 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 17:31:53.056905   32399 main.go:141] libmachine: (ha-683878-m03) DBG | About to run SSH command:
	I0815 17:31:53.056921   32399 main.go:141] libmachine: (ha-683878-m03) DBG | exit 0
	I0815 17:31:53.180785   32399 main.go:141] libmachine: (ha-683878-m03) DBG | SSH cmd err, output: <nil>: 
	I0815 17:31:53.181085   32399 main.go:141] libmachine: (ha-683878-m03) KVM machine creation complete!
	I0815 17:31:53.181456   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetConfigRaw
	I0815 17:31:53.182022   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:53.182220   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:53.182371   32399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 17:31:53.182384   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:31:53.183751   32399 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 17:31:53.183764   32399 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 17:31:53.183770   32399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 17:31:53.183776   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.186394   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.186831   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.186867   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.187016   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.187167   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.187311   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.187459   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.187620   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.187807   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.187818   32399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 17:31:53.291782   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
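	WaitForSSH amounts to "dial the guest and run exit 0 until it succeeds". A sketch of the same probe with golang.org/x/crypto/ssh (illustrative helper; the log above also shows the external-ssh variant of this check):

    import (
        "net"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // sshAlive dials host:22 with the machine's private key and runs `exit 0`,
    // which is exactly the probe logged above.
    func sshAlive(host string, key []byte) error {
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, no known_hosts
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", net.JoinHostPort(host, "22"), cfg)
        if err != nil {
            return err
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        return session.Run("exit 0")
    }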
	I0815 17:31:53.291806   32399 main.go:141] libmachine: Detecting the provisioner...
	I0815 17:31:53.291814   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.294620   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.294976   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.294997   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.295230   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.295406   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.295564   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.295699   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.295846   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.296019   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.296032   32399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 17:31:53.397359   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 17:31:53.397479   32399 main.go:141] libmachine: found compatible host: buildroot
	I0815 17:31:53.397494   32399 main.go:141] libmachine: Provisioning with buildroot...
	I0815 17:31:53.397508   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetMachineName
	I0815 17:31:53.397759   32399 buildroot.go:166] provisioning hostname "ha-683878-m03"
	I0815 17:31:53.397785   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetMachineName
	I0815 17:31:53.397957   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.400696   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.401105   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.401135   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.401295   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.401479   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.401639   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.401789   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.401954   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.402119   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.402135   32399 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683878-m03 && echo "ha-683878-m03" | sudo tee /etc/hostname
	I0815 17:31:53.518924   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878-m03
	
	I0815 17:31:53.518949   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.521720   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.522053   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.522074   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.522242   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.522435   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.522619   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.522759   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.522909   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.523077   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.523099   32399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683878-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683878-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683878-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:31:53.633947   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:31:53.633976   32399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:31:53.633995   32399 buildroot.go:174] setting up certificates
	I0815 17:31:53.634007   32399 provision.go:84] configureAuth start
	I0815 17:31:53.634020   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetMachineName
	I0815 17:31:53.634315   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:31:53.636975   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.637357   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.637386   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.637487   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.639565   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.640038   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.640061   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.640234   32399 provision.go:143] copyHostCerts
	I0815 17:31:53.640261   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:31:53.640297   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 17:31:53.640309   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:31:53.640387   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:31:53.640520   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:31:53.640554   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 17:31:53.640560   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:31:53.640588   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:31:53.640648   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:31:53.640669   32399 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 17:31:53.640678   32399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:31:53.640712   32399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:31:53.640776   32399 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.ha-683878-m03 san=[127.0.0.1 192.168.39.102 ha-683878-m03 localhost minikube]
	I0815 17:31:53.750181   32399 provision.go:177] copyRemoteCerts
	I0815 17:31:53.750238   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:31:53.750261   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.752842   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.753275   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.753304   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.753444   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.753617   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.753740   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.753856   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:31:53.834774   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:31:53.834875   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:31:53.859383   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:31:53.859457   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:31:53.885113   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:31:53.885196   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:31:53.909108   32399 provision.go:87] duration metric: took 275.089302ms to configureAuth
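configureAuth issues a server certificate whose SANs cover the node IP, hostname, localhost and 127.0.0.1 (the san=[...] list above) and copies it to /etc/docker on the guest. A quick SAN check on the machine, assuming the guest's openssl supports -ext (1.1.1 or newer), could be:

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName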
	I0815 17:31:53.909132   32399 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:31:53.909347   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:31:53.909436   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:53.912274   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.912683   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:53.912709   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:53.912871   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:53.913055   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.913203   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:53.913334   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:53.913469   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:53.913616   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:53.913631   32399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:31:54.173348   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
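The command writes CRIO_MINIKUBE_OPTIONS (here marking the service CIDR 10.96.0.0/12 as an insecure registry) to /etc/sysconfig/crio.minikube and restarts CRI-O. A simple follow-up check on the guest, sketched here rather than taken from the run, would be:

    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio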
	
	I0815 17:31:54.173375   32399 main.go:141] libmachine: Checking connection to Docker...
	I0815 17:31:54.173385   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetURL
	I0815 17:31:54.174751   32399 main.go:141] libmachine: (ha-683878-m03) DBG | Using libvirt version 6000000
	I0815 17:31:54.176993   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.177277   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.177303   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.177538   32399 main.go:141] libmachine: Docker is up and running!
	I0815 17:31:54.177554   32399 main.go:141] libmachine: Reticulating splines...
	I0815 17:31:54.177561   32399 client.go:171] duration metric: took 25.768881471s to LocalClient.Create
	I0815 17:31:54.177582   32399 start.go:167] duration metric: took 25.768939477s to libmachine.API.Create "ha-683878"
	I0815 17:31:54.177593   32399 start.go:293] postStartSetup for "ha-683878-m03" (driver="kvm2")
	I0815 17:31:54.177606   32399 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:31:54.177624   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.177846   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:31:54.177868   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:54.180005   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.180380   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.180408   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.180562   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:54.180722   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.180883   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:54.181027   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:31:54.258943   32399 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:31:54.263288   32399 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:31:54.263313   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:31:54.263385   32399 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:31:54.263498   32399 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 17:31:54.263509   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 17:31:54.263613   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:31:54.272991   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:31:54.297358   32399 start.go:296] duration metric: took 119.753559ms for postStartSetup
	I0815 17:31:54.297418   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetConfigRaw
	I0815 17:31:54.297961   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:31:54.300667   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.301051   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.301088   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.301327   32399 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:31:54.301514   32399 start.go:128] duration metric: took 25.911150347s to createHost
	I0815 17:31:54.301539   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:54.303671   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.304033   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.304061   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.304193   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:54.304352   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.304570   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.304720   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:54.304925   32399 main.go:141] libmachine: Using SSH client type: native
	I0815 17:31:54.305111   32399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.102 22 <nil> <nil>}
	I0815 17:31:54.305126   32399 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:31:54.405817   32399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723743114.383134289
	
	I0815 17:31:54.405839   32399 fix.go:216] guest clock: 1723743114.383134289
	I0815 17:31:54.405849   32399 fix.go:229] Guest: 2024-08-15 17:31:54.383134289 +0000 UTC Remote: 2024-08-15 17:31:54.30152525 +0000 UTC m=+199.534419910 (delta=81.609039ms)
	I0815 17:31:54.405867   32399 fix.go:200] guest clock delta is within tolerance: 81.609039ms
	I0815 17:31:54.405873   32399 start.go:83] releasing machines lock for "ha-683878-m03", held for 26.015614375s
	I0815 17:31:54.405902   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.406141   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:31:54.408440   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.408787   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.408820   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.410739   32399 out.go:177] * Found network options:
	I0815 17:31:54.411976   32399 out.go:177]   - NO_PROXY=192.168.39.17,192.168.39.232
	W0815 17:31:54.413078   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:31:54.413103   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:31:54.413132   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.413584   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.413723   32399 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:31:54.413829   32399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:31:54.413866   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	W0815 17:31:54.413943   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	W0815 17:31:54.413971   32399 proxy.go:119] fail to check proxy env: Error ip not in block
	I0815 17:31:54.414031   32399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:31:54.414051   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:31:54.416376   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.416579   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.416776   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.416803   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.416945   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:54.416966   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:54.416989   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:54.417100   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.417164   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:31:54.417261   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:54.417335   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:31:54.417403   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:31:54.417439   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:31:54.417556   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:31:54.643699   32399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:31:54.649737   32399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:31:54.649805   32399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:31:54.669695   32399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 17:31:54.669719   32399 start.go:495] detecting cgroup driver to use...
	I0815 17:31:54.669781   32399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:31:54.689200   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:31:54.705716   32399 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:31:54.705767   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:31:54.721518   32399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:31:54.737193   32399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:31:54.878133   32399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:31:55.043938   32399 docker.go:233] disabling docker service ...
	I0815 17:31:55.044009   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:31:55.057741   32399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:31:55.070816   32399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:31:55.190566   32399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:31:55.301710   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:31:55.314980   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:31:55.333061   32399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:31:55.333158   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.343340   32399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:31:55.343408   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.353288   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.363495   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.374357   32399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:31:55.384672   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.394992   32399 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.412506   32399 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:31:55.422696   32399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:31:55.432113   32399 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 17:31:55.432161   32399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 17:31:55.444560   32399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:31:55.453428   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:31:55.596823   32399 ssh_runner.go:195] Run: sudo systemctl restart crio
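The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, place conmon in the pod cgroup and open unprivileged low ports; the restart makes them effective. Reconstructed from those edits (not dumped from the machine), the relevant lines of /etc/crio/crio.conf.d/02-crio.conf should afterwards look roughly like:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",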
	I0815 17:31:55.735933   32399 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:31:55.736005   32399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:31:55.740912   32399 start.go:563] Will wait 60s for crictl version
	I0815 17:31:55.740966   32399 ssh_runner.go:195] Run: which crictl
	I0815 17:31:55.744555   32399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:31:55.781290   32399 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:31:55.781354   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:31:55.808725   32399 ssh_runner.go:195] Run: crio --version
	I0815 17:31:55.837300   32399 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:31:55.838873   32399 out.go:177]   - env NO_PROXY=192.168.39.17
	I0815 17:31:55.840255   32399 out.go:177]   - env NO_PROXY=192.168.39.17,192.168.39.232
	I0815 17:31:55.841437   32399 main.go:141] libmachine: (ha-683878-m03) Calling .GetIP
	I0815 17:31:55.844175   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:55.844551   32399 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:31:55.844574   32399 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:31:55.844808   32399 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:31:55.848978   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:31:55.861180   32399 mustload.go:65] Loading cluster: ha-683878
	I0815 17:31:55.861433   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:31:55.861784   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:31:55.861826   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:31:55.876124   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42861
	I0815 17:31:55.876509   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:31:55.876942   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:31:55.876959   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:31:55.877267   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:31:55.877438   32399 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:31:55.879049   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:31:55.879368   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:31:55.879402   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:31:55.895207   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0815 17:31:55.895642   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:31:55.896119   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:31:55.896144   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:31:55.896465   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:31:55.896631   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:31:55.896784   32399 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878 for IP: 192.168.39.102
	I0815 17:31:55.896800   32399 certs.go:194] generating shared ca certs ...
	I0815 17:31:55.896817   32399 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:31:55.896930   32399 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:31:55.896964   32399 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:31:55.896973   32399 certs.go:256] generating profile certs ...
	I0815 17:31:55.897039   32399 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key
	I0815 17:31:55.897062   32399 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.79bf3ced
	I0815 17:31:55.897075   32399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.79bf3ced with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17 192.168.39.232 192.168.39.102 192.168.39.254]
	I0815 17:31:55.960572   32399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.79bf3ced ...
	I0815 17:31:55.960600   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.79bf3ced: {Name:mk99fa0b5f620c685341a21e4bc78e62e9b202fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:31:55.960752   32399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.79bf3ced ...
	I0815 17:31:55.960763   32399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.79bf3ced: {Name:mk311eb5add21f571a8af06cc429c9bc098bb06b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:31:55.960834   32399 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.79bf3ced -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt
	I0815 17:31:55.960954   32399 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.79bf3ced -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key
	I0815 17:31:55.961094   32399 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key
	I0815 17:31:55.961108   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:31:55.961126   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:31:55.961140   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:31:55.961158   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:31:55.961170   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:31:55.961183   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:31:55.961194   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:31:55.961205   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:31:55.961256   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 17:31:55.961284   32399 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 17:31:55.961293   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:31:55.961317   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:31:55.961337   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:31:55.961357   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:31:55.961398   32399 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:31:55.961429   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 17:31:55.961447   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:31:55.961459   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 17:31:55.961487   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:31:55.964187   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:31:55.964579   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:31:55.964605   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:31:55.964783   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:31:55.964979   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:31:55.965114   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:31:55.965284   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:31:56.036845   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0815 17:31:56.041500   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0815 17:31:56.052745   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0815 17:31:56.056704   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0815 17:31:56.067039   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0815 17:31:56.071383   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0815 17:31:56.087715   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0815 17:31:56.092010   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0815 17:31:56.109688   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0815 17:31:56.115532   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0815 17:31:56.128033   32399 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0815 17:31:56.132432   32399 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0815 17:31:56.145389   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:31:56.174587   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:31:56.207751   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:31:56.235127   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:31:56.261869   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0815 17:31:56.286224   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 17:31:56.310536   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:31:56.333198   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:31:56.356534   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 17:31:56.380651   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:31:56.403739   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 17:31:56.426359   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0815 17:31:56.443642   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0815 17:31:56.459514   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0815 17:31:56.476982   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0815 17:31:56.493955   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0815 17:31:56.509776   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0815 17:31:56.525475   32399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0815 17:31:56.541462   32399 ssh_runner.go:195] Run: openssl version
	I0815 17:31:56.546979   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 17:31:56.556842   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 17:31:56.561162   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 17:31:56.561207   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 17:31:56.566795   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:31:56.577387   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:31:56.587983   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:31:56.592183   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:31:56.592226   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:31:56.598278   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:31:56.608642   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 17:31:56.619243   32399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 17:31:56.623720   32399 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 17:31:56.623768   32399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 17:31:56.629307   32399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
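The ln -fs targets (/etc/ssl/certs/3ec20f2e.0, /etc/ssl/certs/b5213941.0, /etc/ssl/certs/51391683.0) mirror what update-ca-certificates would create: the link name is the OpenSSL subject hash of the certificate, which TLS clients use for lookup. The hash for minikubeCA.pem, for example, can be reproduced on the guest with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0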
	I0815 17:31:56.639663   32399 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:31:56.643786   32399 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:31:56.643841   32399 kubeadm.go:934] updating node {m03 192.168.39.102 8443 v1.31.0 crio true true} ...
	I0815 17:31:56.643923   32399 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683878-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.102
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:31:56.643948   32399 kube-vip.go:115] generating kube-vip config ...
	I0815 17:31:56.643973   32399 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 17:31:56.658907   32399 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:31:56.658960   32399 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
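The generated kube-vip static pod advertises the control-plane VIP 192.168.39.254 on eth0 and load-balances API traffic on port 8443 across the control-plane nodes. Once the kubelet picks up the manifest, a rough way to see whether a given node currently holds the VIP (a sketch, not part of the test) is:

    ip addr show eth0 | grep 192.168.39.254
    sudo crictl ps --name kube-vip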
	I0815 17:31:56.658997   32399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:31:56.669211   32399 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.0': No such file or directory
	
	Initiating transfer...
	I0815 17:31:56.669251   32399 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.0
	I0815 17:31:56.678795   32399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubectl.sha256
	I0815 17:31:56.678795   32399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256
	I0815 17:31:56.678822   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubectl -> /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 17:31:56.678858   32399 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubeadm.sha256
	I0815 17:31:56.678879   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm -> /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 17:31:56.678910   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl
	I0815 17:31:56.678946   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm
	I0815 17:31:56.678879   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:31:56.688360   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubeadm': No such file or directory
	I0815 17:31:56.688385   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubeadm --> /var/lib/minikube/binaries/v1.31.0/kubeadm (58290328 bytes)
	I0815 17:31:56.688776   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubectl': No such file or directory
	I0815 17:31:56.688791   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubectl --> /var/lib/minikube/binaries/v1.31.0/kubectl (56381592 bytes)
	I0815 17:31:56.702959   32399 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet -> /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 17:31:56.703073   32399 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet
	I0815 17:31:56.806479   32399 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.0/kubelet': No such file or directory
	I0815 17:31:56.806520   32399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.31.0/kubelet --> /var/lib/minikube/binaries/v1.31.0/kubelet (76865848 bytes)
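Since /var/lib/minikube/binaries/v1.31.0 was empty, kubeadm, kubectl and kubelet are copied over from the local cache; the log also records the dl.k8s.io URLs and their .sha256 companions. Fetching and verifying one binary by hand follows the same pattern, e.g.:

    curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.31.0/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check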
	I0815 17:31:57.547743   32399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0815 17:31:57.558405   32399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0815 17:31:57.575793   32399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:31:57.593950   32399 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
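At this point the kubelet service unit, its kubeadm drop-in (carrying the --node-ip and --hostname-override flags from the ExecStart shown earlier) and the kube-vip static-pod manifest are all in place on m03. A quick way to confirm what the kubelet will start with, run on that node, might be:

    systemctl cat kubelet | grep -E 'node-ip|hostname-override'
    ls /etc/kubernetes/manifests/kube-vip.yaml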
	I0815 17:31:57.610985   32399 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:31:57.614996   32399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:31:57.627994   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:31:57.761190   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:31:57.778062   32399 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:31:57.778508   32399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:31:57.778553   32399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:31:57.793848   32399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41177
	I0815 17:31:57.794348   32399 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:31:57.794859   32399 main.go:141] libmachine: Using API Version  1
	I0815 17:31:57.794879   32399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:31:57.795374   32399 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:31:57.795570   32399 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:31:57.795722   32399 start.go:317] joinCluster: &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:31:57.795841   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0815 17:31:57.795863   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:31:57.799015   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:31:57.799408   32399 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:31:57.799447   32399 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:31:57.799520   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:31:57.799702   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:31:57.799865   32399 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:31:57.799975   32399 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:31:57.949722   32399 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:31:57.949774   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01d3mu.y2f8jenobaipuomd --discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683878-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443"
	I0815 17:32:21.061931   32399 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 01d3mu.y2f8jenobaipuomd --discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-683878-m03 --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443": (23.112127181s)
	I0815 17:32:21.061975   32399 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0815 17:32:21.557613   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-683878-m03 minikube.k8s.io/updated_at=2024_08_15T17_32_21_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=ha-683878 minikube.k8s.io/primary=false
	I0815 17:32:21.681719   32399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-683878-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0815 17:32:21.805600   32399 start.go:319] duration metric: took 24.009873883s to joinCluster
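The sequence above is the standard kubeadm control-plane join, driven over SSH: minikube mints a non-expiring token on an existing control-plane node, runs kubeadm join on the new machine against the HA endpoint with that node's advertise address, then labels the node and removes the control-plane NoSchedule taint because the node is also a worker (Worker:true). A minimal sketch of the equivalent manual steps, with <token> and <hash> standing in for the values printed in the log (minikube additionally passes --ignore-preflight-errors=all, --cri-socket and --node-name, as shown above):

    # On an existing control-plane node: print a non-expiring join command.
    sudo kubeadm token create --print-join-command --ttl=0

    # On the new machine: join as an additional control plane.
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.102 --apiserver-bind-port=8443

    # Afterwards the node is labelled and the control-plane taint is dropped, as in the log:
    kubectl label --overwrite nodes ha-683878-m03 minikube.k8s.io/primary=false
    kubectl taint nodes ha-683878-m03 node-role.kubernetes.io/control-plane:NoSchedule-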
	I0815 17:32:21.805670   32399 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 17:32:21.806123   32399 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:32:21.807232   32399 out.go:177] * Verifying Kubernetes components...
	I0815 17:32:21.808598   32399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:32:22.076040   32399 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:32:22.173020   32399 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:32:22.173238   32399 kapi.go:59] client config for ha-683878: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0815 17:32:22.173293   32399 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.17:8443
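Note the host override: the profile's kubeconfig points at the HA control-plane endpoint https://192.168.39.254:8443 (presumably the virtual IP managed by the kube-vip pods listed later), but for these verification calls minikube talks to the primary node's apiserver at 192.168.39.17:8443 directly. A hedged command-line equivalent, assuming the context is named after the profile and that the cluster CA covers the node IP (true for minikube-generated apiserver certificates):

    # Point kubectl at one specific apiserver instead of the VIP.
    kubectl --context ha-683878 --server=https://192.168.39.17:8443 get nodes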
	I0815 17:32:22.173499   32399 node_ready.go:35] waiting up to 6m0s for node "ha-683878-m03" to be "Ready" ...
	I0815 17:32:22.173577   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:22.173584   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:22.173592   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:22.173597   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:22.176897   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:22.673973   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:22.673996   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:22.674007   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:22.674012   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:22.677618   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:23.174264   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:23.174291   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:23.174302   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:23.174306   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:23.177438   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:23.674740   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:23.674766   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:23.674778   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:23.674784   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:23.682038   32399 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 17:32:24.173830   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:24.173852   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:24.173860   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:24.173864   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:24.177131   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:24.177861   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:24.673796   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:24.673819   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:24.673827   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:24.673831   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:24.677195   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:25.174368   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:25.174387   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:25.174396   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:25.174400   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:25.183660   32399 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0815 17:32:25.673777   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:25.673796   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:25.673804   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:25.673807   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:25.677326   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:26.173858   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:26.173879   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:26.173887   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:26.173892   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:26.177125   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:26.674647   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:26.674669   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:26.674680   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:26.674685   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:26.677932   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:26.678662   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:27.174495   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:27.174516   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:27.174524   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:27.174528   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:27.177834   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:27.673799   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:27.673819   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:27.673827   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:27.673830   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:27.676845   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:28.174280   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:28.174302   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:28.174309   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:28.174312   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:28.177826   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:28.673925   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:28.673952   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:28.673964   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:28.673970   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:28.677210   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:29.173938   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:29.173957   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:29.173965   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:29.173971   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:29.177003   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:29.177664   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:29.674034   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:29.674060   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:29.674072   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:29.674077   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:29.676942   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:30.174001   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:30.174028   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:30.174042   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:30.174047   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:30.177454   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:30.674403   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:30.674429   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:30.674437   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:30.674441   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:30.677462   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:31.174102   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:31.174126   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:31.174135   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:31.174138   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:31.176996   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:31.177822   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:31.674178   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:31.674205   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:31.674216   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:31.674222   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:31.677546   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:32.174085   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:32.174112   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:32.174123   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:32.174129   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:32.177640   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:32.674633   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:32.674659   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:32.674672   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:32.674678   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:32.678233   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:33.174528   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:33.174553   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:33.174562   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:33.174569   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:33.177783   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:33.178261   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:33.674150   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:33.674173   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:33.674183   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:33.674188   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:33.677506   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:34.174304   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:34.174326   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:34.174332   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:34.174337   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:34.177467   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:34.674560   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:34.674582   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:34.674590   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:34.674596   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:34.678094   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:35.174310   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:35.174335   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:35.174345   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:35.174350   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:35.177605   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:35.674579   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:35.674598   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:35.674607   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:35.674611   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:35.678147   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:35.678994   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:36.174231   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:36.174252   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:36.174260   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:36.174264   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:36.177589   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:36.674574   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:36.674596   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:36.674604   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:36.674609   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:36.678004   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:37.174347   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:37.174370   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:37.174381   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:37.174388   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:37.177647   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:37.673743   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:37.673764   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:37.673772   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:37.673777   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:37.676724   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:38.174338   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:38.174360   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:38.174368   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:38.174372   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:38.178127   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:38.178726   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:38.674525   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:38.674548   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:38.674559   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:38.674566   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:38.677778   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:39.174657   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:39.174683   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:39.174696   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:39.174703   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:39.177931   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:39.674704   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:39.674729   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:39.674741   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:39.674748   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:39.678143   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:40.174418   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:40.174440   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:40.174448   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:40.174452   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:40.177712   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:40.674640   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:40.674661   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:40.674670   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:40.674674   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:40.677879   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:40.678783   32399 node_ready.go:53] node "ha-683878-m03" has status "Ready":"False"
	I0815 17:32:41.174250   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:41.174270   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.174278   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.174283   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.177452   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:41.178159   32399 node_ready.go:49] node "ha-683878-m03" has status "Ready":"True"
	I0815 17:32:41.178180   32399 node_ready.go:38] duration metric: took 19.00466153s for node "ha-683878-m03" to be "Ready" ...
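The block of repeated GETs above is node_ready.go polling the Node object roughly every 500ms until its Ready condition turns True, which here takes about 19 seconds (typically the time for kubelet and the CNI, kindnet here, to settle on the new node). A rough command-line equivalent, assuming the kubeconfig context is named after the profile:

    # Block until the new control-plane node reports Ready, with the same 6-minute ceiling.
    kubectl --context ha-683878 wait --for=condition=Ready node/ha-683878-m03 --timeout=6m0s

    # Or inspect the condition directly, which is what each GET above is checking:
    kubectl --context ha-683878 get node ha-683878-m03 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'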
	I0815 17:32:41.178191   32399 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:32:41.178269   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:41.178282   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.178291   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.178295   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.185480   32399 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0815 17:32:41.194636   32399 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.194737   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-c5mlj
	I0815 17:32:41.194748   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.194760   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.194773   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.200799   32399 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0815 17:32:41.201737   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.201751   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.201759   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.201762   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.204079   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.204643   32399 pod_ready.go:93] pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.204667   32399 pod_ready.go:82] duration metric: took 10.00508ms for pod "coredns-6f6b679f8f-c5mlj" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.204681   32399 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.204747   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/coredns-6f6b679f8f-kfczp
	I0815 17:32:41.204758   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.204767   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.204778   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.207460   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.208013   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.208027   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.208033   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.208037   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.210374   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.210862   32399 pod_ready.go:93] pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.210876   32399 pod_ready.go:82] duration metric: took 6.18734ms for pod "coredns-6f6b679f8f-kfczp" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.210885   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.210930   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878
	I0815 17:32:41.210939   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.210948   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.210956   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.213116   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.213720   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.213733   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.213740   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.213743   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.216319   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.217149   32399 pod_ready.go:93] pod "etcd-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.217165   32399 pod_ready.go:82] duration metric: took 6.274422ms for pod "etcd-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.217173   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.217219   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878-m02
	I0815 17:32:41.217226   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.217233   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.217238   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.219588   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.220341   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:41.220357   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.220367   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.220372   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.222638   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.223154   32399 pod_ready.go:93] pod "etcd-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.223172   32399 pod_ready.go:82] duration metric: took 5.990647ms for pod "etcd-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.223183   32399 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.374524   32399 request.go:632] Waited for 151.285572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878-m03
	I0815 17:32:41.374582   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/etcd-ha-683878-m03
	I0815 17:32:41.374587   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.374594   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.374599   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.377348   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:41.574281   32399 request.go:632] Waited for 196.280265ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:41.574338   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:41.574343   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.574350   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.574354   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.577446   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:41.577906   32399 pod_ready.go:93] pod "etcd-ha-683878-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.577921   32399 pod_ready.go:82] duration metric: took 354.73017ms for pod "etcd-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
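The "Waited ... due to client-side throttling" lines come from client-go's rate limiter, not server-side API Priority and Fairness: minikube leaves QPS and Burst unset (QPS:0, Burst:0 in the client config dump above), so the client falls back to client-go's defaults (about 5 requests/second, burst 10) and spaces out the back-to-back pod and node GETs. Each pair of GETs checks one control-plane pod's Ready condition and then confirms its node; a sketch of the same per-pod check for the new node's components:

    # What each pod GET above is asserting, per control-plane component on ha-683878-m03:
    for p in etcd kube-apiserver kube-controller-manager kube-scheduler; do
      kubectl -n kube-system get pod "${p}-ha-683878-m03" \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo " ${p}"
    done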
	I0815 17:32:41.577938   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.775204   32399 request.go:632] Waited for 197.209512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878
	I0815 17:32:41.775274   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878
	I0815 17:32:41.775281   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.775288   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.775295   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.778946   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:41.975063   32399 request.go:632] Waited for 195.363549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.975132   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:41.975143   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:41.975155   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:41.975164   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:41.978691   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:41.979300   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:41.979316   32399 pod_ready.go:82] duration metric: took 401.371948ms for pod "kube-apiserver-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:41.979325   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.174789   32399 request.go:632] Waited for 195.405615ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m02
	I0815 17:32:42.174852   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m02
	I0815 17:32:42.174857   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.174864   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.174868   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.178064   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:42.375253   32399 request.go:632] Waited for 196.415171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:42.375320   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:42.375330   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.375341   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.375345   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.378731   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:42.379201   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:42.379220   32399 pod_ready.go:82] duration metric: took 399.888478ms for pod "kube-apiserver-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.379232   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.574296   32399 request.go:632] Waited for 194.992186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m03
	I0815 17:32:42.574362   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-683878-m03
	I0815 17:32:42.574367   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.574374   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.574378   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.578084   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:42.775187   32399 request.go:632] Waited for 196.347179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:42.775235   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:42.775244   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.775252   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.775257   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.778291   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:42.778885   32399 pod_ready.go:93] pod "kube-apiserver-ha-683878-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:42.778903   32399 pod_ready.go:82] duration metric: took 399.66364ms for pod "kube-apiserver-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.778912   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:42.974310   32399 request.go:632] Waited for 195.305249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878
	I0815 17:32:42.974369   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878
	I0815 17:32:42.974377   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:42.974388   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:42.974395   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:42.977810   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.174987   32399 request.go:632] Waited for 196.373013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:43.175054   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:43.175060   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.175067   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.175071   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.177986   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:43.178590   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:43.178608   32399 pod_ready.go:82] duration metric: took 399.690127ms for pod "kube-controller-manager-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.178618   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.374684   32399 request.go:632] Waited for 195.978252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m02
	I0815 17:32:43.374733   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m02
	I0815 17:32:43.374738   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.374746   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.374750   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.378406   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.574421   32399 request.go:632] Waited for 195.312861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:43.574472   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:43.574477   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.574486   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.574491   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.577649   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.578447   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:43.578470   32399 pod_ready.go:82] duration metric: took 399.832761ms for pod "kube-controller-manager-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.578486   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.774568   32399 request.go:632] Waited for 196.016436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m03
	I0815 17:32:43.774649   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-683878-m03
	I0815 17:32:43.774656   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.774664   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.774669   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.778046   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.974626   32399 request.go:632] Waited for 195.821317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:43.974693   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:43.974698   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:43.974705   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:43.974710   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:43.978245   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:43.978820   32399 pod_ready.go:93] pod "kube-controller-manager-ha-683878-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:43.978848   32399 pod_ready.go:82] duration metric: took 400.353646ms for pod "kube-controller-manager-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:43.978863   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-89p4v" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.174830   32399 request.go:632] Waited for 195.889616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89p4v
	I0815 17:32:44.174914   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-89p4v
	I0815 17:32:44.174925   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.174933   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.174939   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.178234   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:44.375251   32399 request.go:632] Waited for 196.352467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:44.375309   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:44.375314   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.375321   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.375325   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.378310   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:44.379019   32399 pod_ready.go:93] pod "kube-proxy-89p4v" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:44.379040   32399 pod_ready.go:82] duration metric: took 400.166256ms for pod "kube-proxy-89p4v" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.379052   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8bp98" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.575166   32399 request.go:632] Waited for 196.047647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bp98
	I0815 17:32:44.575235   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8bp98
	I0815 17:32:44.575243   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.575253   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.575262   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.578454   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:44.774652   32399 request.go:632] Waited for 195.35787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:44.774707   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:44.774712   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.774720   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.774723   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.777575   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:44.778134   32399 pod_ready.go:93] pod "kube-proxy-8bp98" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:44.778152   32399 pod_ready.go:82] duration metric: took 399.092736ms for pod "kube-proxy-8bp98" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.778162   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s9hw4" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:44.974958   32399 request.go:632] Waited for 196.713091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9hw4
	I0815 17:32:44.975028   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9hw4
	I0815 17:32:44.975035   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:44.975045   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:44.975054   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:44.978400   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.174573   32399 request.go:632] Waited for 195.222828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:45.174689   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:45.174704   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.174714   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.174721   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.178336   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.178954   32399 pod_ready.go:93] pod "kube-proxy-s9hw4" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:45.178980   32399 pod_ready.go:82] duration metric: took 400.811627ms for pod "kube-proxy-s9hw4" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.178995   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.375048   32399 request.go:632] Waited for 195.962331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878
	I0815 17:32:45.375123   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878
	I0815 17:32:45.375128   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.375136   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.375140   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.378524   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.574451   32399 request.go:632] Waited for 195.265569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:45.574519   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878
	I0815 17:32:45.574524   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.574531   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.574536   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.577566   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.578090   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:45.578107   32399 pod_ready.go:82] duration metric: took 399.104498ms for pod "kube-scheduler-ha-683878" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.578119   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.775273   32399 request.go:632] Waited for 197.08497ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m02
	I0815 17:32:45.775354   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m02
	I0815 17:32:45.775361   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.775368   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.775376   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.778426   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:45.974866   32399 request.go:632] Waited for 195.970601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:45.974917   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m02
	I0815 17:32:45.974923   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:45.974930   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:45.974941   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:45.977926   32399 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0815 17:32:45.978390   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:45.978407   32399 pod_ready.go:82] duration metric: took 400.28082ms for pod "kube-scheduler-ha-683878-m02" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:45.978417   32399 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:46.174534   32399 request.go:632] Waited for 196.052755ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m03
	I0815 17:32:46.174627   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-683878-m03
	I0815 17:32:46.174639   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.174650   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.174658   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.177715   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:46.374808   32399 request.go:632] Waited for 196.339932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:46.374863   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes/ha-683878-m03
	I0815 17:32:46.374870   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.374878   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.374888   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.378435   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:46.379191   32399 pod_ready.go:93] pod "kube-scheduler-ha-683878-m03" in "kube-system" namespace has status "Ready":"True"
	I0815 17:32:46.379206   32399 pod_ready.go:82] duration metric: took 400.783564ms for pod "kube-scheduler-ha-683878-m03" in "kube-system" namespace to be "Ready" ...
	I0815 17:32:46.379215   32399 pod_ready.go:39] duration metric: took 5.201008555s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
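Altogether this pod_ready phase takes about 5.2 seconds: every pod carrying one of the listed system-critical labels (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) must be Ready across all three nodes before the join is considered healthy. The aggregate wait can be approximated with label selectors (a sketch; kube-vip and kindnet are not part of this particular check):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l 'k8s-app in (kube-dns, kube-proxy)' --timeout=6m0s
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)' --timeout=6m0s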
	I0815 17:32:46.379231   32399 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:32:46.379286   32399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:32:46.398250   32399 api_server.go:72] duration metric: took 24.592549351s to wait for apiserver process to appear ...
	I0815 17:32:46.398276   32399 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:32:46.398297   32399 api_server.go:253] Checking apiserver healthz at https://192.168.39.17:8443/healthz ...
	I0815 17:32:46.405902   32399 api_server.go:279] https://192.168.39.17:8443/healthz returned 200:
	ok
	I0815 17:32:46.405978   32399 round_trippers.go:463] GET https://192.168.39.17:8443/version
	I0815 17:32:46.405988   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.406001   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.406012   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.406873   32399 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0815 17:32:46.406947   32399 api_server.go:141] control plane version: v1.31.0
	I0815 17:32:46.406960   32399 api_server.go:131] duration metric: took 8.676545ms to wait for apiserver health ...
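With the pods Ready, the remaining checks are that a kube-apiserver process exists on the primary node (the pgrep above) and that the apiserver answers /healthz and /version through the authenticated client, confirming control-plane version v1.31.0. A sketch of the equivalent commands (the -p flag selects the minikube profile):

    # Process check on the primary node, then API health and version through kubectl:
    minikube -p ha-683878 ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"
    kubectl get --raw=/healthz && echo     # prints "ok"
    kubectl get --raw=/version             # reports gitVersion v1.31.0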
	I0815 17:32:46.406971   32399 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:32:46.575323   32399 request.go:632] Waited for 168.273095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:46.575399   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:46.575407   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.575416   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.575422   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.591852   32399 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0815 17:32:46.599397   32399 system_pods.go:59] 24 kube-system pods found
	I0815 17:32:46.599422   32399 system_pods.go:61] "coredns-6f6b679f8f-c5mlj" [24146559-ea1d-42db-9f61-730ed436dea8] Running
	I0815 17:32:46.599427   32399 system_pods.go:61] "coredns-6f6b679f8f-kfczp" [5d18cfeb-ccfe-4432-b999-510d84438c7a] Running
	I0815 17:32:46.599430   32399 system_pods.go:61] "etcd-ha-683878" [89164a36-1867-4d3e-8b16-4b6e3f5735d9] Running
	I0815 17:32:46.599434   32399 system_pods.go:61] "etcd-ha-683878-m02" [ffd47718-50f2-42b0-8759-390d981a69b8] Running
	I0815 17:32:46.599437   32399 system_pods.go:61] "etcd-ha-683878-m03" [0d49fecb-c4ae-4f81-94e3-1042caeb1d6e] Running
	I0815 17:32:46.599441   32399 system_pods.go:61] "kindnet-6bccr" [43768eb8-6f4d-443f-afd5-af43e96556a1] Running
	I0815 17:32:46.599446   32399 system_pods.go:61] "kindnet-g8lqf" [bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e] Running
	I0815 17:32:46.599451   32399 system_pods.go:61] "kindnet-z5z9h" [525522f9-4aef-49ae-9f3f-02960fe82bff] Running
	I0815 17:32:46.599455   32399 system_pods.go:61] "kube-apiserver-ha-683878" [265e1832-cd30-4ba1-9aa5-5e18cd71e8f0] Running
	I0815 17:32:46.599460   32399 system_pods.go:61] "kube-apiserver-ha-683878-m02" [bff6c9d5-5c64-4220-9a17-f3f08b8e5dab] Running
	I0815 17:32:46.599469   32399 system_pods.go:61] "kube-apiserver-ha-683878-m03" [a39a5463-47e0-4a1e-bad5-dca1544c5a3a] Running
	I0815 17:32:46.599474   32399 system_pods.go:61] "kube-controller-manager-ha-683878" [e958c9a5-cf23-4d1a-bf25-ab03393607cb] Running
	I0815 17:32:46.599479   32399 system_pods.go:61] "kube-controller-manager-ha-683878-m02" [fa5ae940-8a2a-4a4c-950c-5fe267cddc2d] Running
	I0815 17:32:46.599487   32399 system_pods.go:61] "kube-controller-manager-ha-683878-m03" [9352fe4c-bc08-4fc3-b001-e34c7b434253] Running
	I0815 17:32:46.599493   32399 system_pods.go:61] "kube-proxy-89p4v" [58c774bf-7b9a-46ad-8d85-81df9b68415a] Running
	I0815 17:32:46.599500   32399 system_pods.go:61] "kube-proxy-8bp98" [009b24bb-3d29-4ba6-b18f-0694f7479636] Running
	I0815 17:32:46.599504   32399 system_pods.go:61] "kube-proxy-s9hw4" [f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1] Running
	I0815 17:32:46.599510   32399 system_pods.go:61] "kube-scheduler-ha-683878" [fe51d20e-6174-48c9-b170-2eff952a4975] Running
	I0815 17:32:46.599513   32399 system_pods.go:61] "kube-scheduler-ha-683878-m02" [bb94ccf5-231f-4bb5-903d-8664be14bc58] Running
	I0815 17:32:46.599519   32399 system_pods.go:61] "kube-scheduler-ha-683878-m03" [1738390e-8c78-48b7-b2cd-3beb5df2cbeb] Running
	I0815 17:32:46.599522   32399 system_pods.go:61] "kube-vip-ha-683878" [9c4a5acc-022d-4756-a0c4-6a867b22f0bb] Running
	I0815 17:32:46.599525   32399 system_pods.go:61] "kube-vip-ha-683878-m02" [041e7349-ab7d-4b80-9f0d-ea92f61d637b] Running
	I0815 17:32:46.599528   32399 system_pods.go:61] "kube-vip-ha-683878-m03" [4092675a-3aac-4e04-b507-c5434f0e3f1c] Running
	I0815 17:32:46.599531   32399 system_pods.go:61] "storage-provisioner" [78d884cc-a5c3-4f94-b643-b6593cb3f622] Running
	I0815 17:32:46.599537   32399 system_pods.go:74] duration metric: took 192.559759ms to wait for pod list to return data ...
	I0815 17:32:46.599547   32399 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:32:46.774966   32399 request.go:632] Waited for 175.342628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:32:46.775030   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/default/serviceaccounts
	I0815 17:32:46.775038   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.775049   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.775060   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.779252   32399 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0815 17:32:46.779365   32399 default_sa.go:45] found service account: "default"
	I0815 17:32:46.779379   32399 default_sa.go:55] duration metric: took 179.826969ms for default service account to be created ...
	I0815 17:32:46.779387   32399 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:32:46.974726   32399 request.go:632] Waited for 195.258635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:46.974801   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/namespaces/kube-system/pods
	I0815 17:32:46.974807   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:46.974816   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:46.974824   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:46.980532   32399 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0815 17:32:46.987356   32399 system_pods.go:86] 24 kube-system pods found
	I0815 17:32:46.987387   32399 system_pods.go:89] "coredns-6f6b679f8f-c5mlj" [24146559-ea1d-42db-9f61-730ed436dea8] Running
	I0815 17:32:46.987392   32399 system_pods.go:89] "coredns-6f6b679f8f-kfczp" [5d18cfeb-ccfe-4432-b999-510d84438c7a] Running
	I0815 17:32:46.987397   32399 system_pods.go:89] "etcd-ha-683878" [89164a36-1867-4d3e-8b16-4b6e3f5735d9] Running
	I0815 17:32:46.987401   32399 system_pods.go:89] "etcd-ha-683878-m02" [ffd47718-50f2-42b0-8759-390d981a69b8] Running
	I0815 17:32:46.987405   32399 system_pods.go:89] "etcd-ha-683878-m03" [0d49fecb-c4ae-4f81-94e3-1042caeb1d6e] Running
	I0815 17:32:46.987408   32399 system_pods.go:89] "kindnet-6bccr" [43768eb8-6f4d-443f-afd5-af43e96556a1] Running
	I0815 17:32:46.987412   32399 system_pods.go:89] "kindnet-g8lqf" [bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e] Running
	I0815 17:32:46.987415   32399 system_pods.go:89] "kindnet-z5z9h" [525522f9-4aef-49ae-9f3f-02960fe82bff] Running
	I0815 17:32:46.987419   32399 system_pods.go:89] "kube-apiserver-ha-683878" [265e1832-cd30-4ba1-9aa5-5e18cd71e8f0] Running
	I0815 17:32:46.987422   32399 system_pods.go:89] "kube-apiserver-ha-683878-m02" [bff6c9d5-5c64-4220-9a17-f3f08b8e5dab] Running
	I0815 17:32:46.987425   32399 system_pods.go:89] "kube-apiserver-ha-683878-m03" [a39a5463-47e0-4a1e-bad5-dca1544c5a3a] Running
	I0815 17:32:46.987430   32399 system_pods.go:89] "kube-controller-manager-ha-683878" [e958c9a5-cf23-4d1a-bf25-ab03393607cb] Running
	I0815 17:32:46.987435   32399 system_pods.go:89] "kube-controller-manager-ha-683878-m02" [fa5ae940-8a2a-4a4c-950c-5fe267cddc2d] Running
	I0815 17:32:46.987438   32399 system_pods.go:89] "kube-controller-manager-ha-683878-m03" [9352fe4c-bc08-4fc3-b001-e34c7b434253] Running
	I0815 17:32:46.987441   32399 system_pods.go:89] "kube-proxy-89p4v" [58c774bf-7b9a-46ad-8d85-81df9b68415a] Running
	I0815 17:32:46.987446   32399 system_pods.go:89] "kube-proxy-8bp98" [009b24bb-3d29-4ba6-b18f-0694f7479636] Running
	I0815 17:32:46.987449   32399 system_pods.go:89] "kube-proxy-s9hw4" [f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1] Running
	I0815 17:32:46.987453   32399 system_pods.go:89] "kube-scheduler-ha-683878" [fe51d20e-6174-48c9-b170-2eff952a4975] Running
	I0815 17:32:46.987456   32399 system_pods.go:89] "kube-scheduler-ha-683878-m02" [bb94ccf5-231f-4bb5-903d-8664be14bc58] Running
	I0815 17:32:46.987459   32399 system_pods.go:89] "kube-scheduler-ha-683878-m03" [1738390e-8c78-48b7-b2cd-3beb5df2cbeb] Running
	I0815 17:32:46.987463   32399 system_pods.go:89] "kube-vip-ha-683878" [9c4a5acc-022d-4756-a0c4-6a867b22f0bb] Running
	I0815 17:32:46.987466   32399 system_pods.go:89] "kube-vip-ha-683878-m02" [041e7349-ab7d-4b80-9f0d-ea92f61d637b] Running
	I0815 17:32:46.987468   32399 system_pods.go:89] "kube-vip-ha-683878-m03" [4092675a-3aac-4e04-b507-c5434f0e3f1c] Running
	I0815 17:32:46.987471   32399 system_pods.go:89] "storage-provisioner" [78d884cc-a5c3-4f94-b643-b6593cb3f622] Running
	I0815 17:32:46.987477   32399 system_pods.go:126] duration metric: took 208.08207ms to wait for k8s-apps to be running ...
	I0815 17:32:46.987487   32399 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:32:46.987530   32399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:32:47.010760   32399 system_svc.go:56] duration metric: took 23.262262ms WaitForService to wait for kubelet
	I0815 17:32:47.010792   32399 kubeadm.go:582] duration metric: took 25.205096133s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:32:47.010818   32399 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:32:47.175223   32399 request.go:632] Waited for 164.325537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.17:8443/api/v1/nodes
	I0815 17:32:47.175289   32399 round_trippers.go:463] GET https://192.168.39.17:8443/api/v1/nodes
	I0815 17:32:47.175294   32399 round_trippers.go:469] Request Headers:
	I0815 17:32:47.175302   32399 round_trippers.go:473]     Accept: application/json, */*
	I0815 17:32:47.175309   32399 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0815 17:32:47.179259   32399 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0815 17:32:47.180358   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:32:47.180379   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:32:47.180390   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:32:47.180396   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:32:47.180401   32399 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 17:32:47.180406   32399 node_conditions.go:123] node cpu capacity is 2
	I0815 17:32:47.180412   32399 node_conditions.go:105] duration metric: took 169.587997ms to run NodePressure ...
	I0815 17:32:47.180438   32399 start.go:241] waiting for startup goroutines ...
	I0815 17:32:47.180589   32399 start.go:255] writing updated cluster config ...
	I0815 17:32:47.181028   32399 ssh_runner.go:195] Run: rm -f paused
	I0815 17:32:47.233242   32399 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 17:32:47.236171   32399 out.go:177] * Done! kubectl is now configured to use "ha-683878" cluster and "default" namespace by default
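	
	The api_server.go wait logged above (17:32:46) simply polls the control-plane /healthz endpoint until it answers 200 "ok". The following Go sketch illustrates that pattern only; the function and variable names (checkHealthz, apiServerURL) are hypothetical and this is not minikube's actual implementation.
	
	```go
	// Illustrative sketch of a /healthz poll, in the spirit of the wait loop
	// logged above. Hypothetical names; not taken from the minikube codebase.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// checkHealthz returns nil once GET <base>/healthz answers 200 "ok",
	// retrying once per second until the timeout expires.
	func checkHealthz(base string, timeout time.Duration) error {
		// The test cluster uses a self-signed CA, so certificate verification
		// is skipped here purely for illustration.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(base + "/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s/healthz returned 200: %s\n", base, body)
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("apiserver %s did not become healthy within %s", base, timeout)
	}
	
	func main() {
		// apiServerURL mirrors the endpoint seen in the log; adjust as needed.
		apiServerURL := "https://192.168.39.17:8443"
		if err := checkHealthz(apiServerURL, 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```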
	
	
	==> CRI-O <==
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.337243144Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fe2eeec7-149d-4d8a-b830-0afd08e6b7c5 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.338231809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f7819113-8dd8-441a-b6bb-e96601ffba3c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.338821353Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743448338799586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7819113-8dd8-441a-b6bb-e96601ffba3c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.339276018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1f2f668-3661-40f7-989f-c702690a1f89 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.339344122Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1f2f668-3661-40f7-989f-c702690a1f89 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.339743002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743172239012531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742979212891907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742978669055170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5b3e71b5c2a125da17643e4a273019b9d35fe1d6d57d95662dbfd5f406ed50,PodSandboxId:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723742977701324355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723742965431323988,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172374296
1580945625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b,PodSandboxId:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172374295380
3914568,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723742950245715779,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b,PodSandboxId:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723742950305057413,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723742950264725617,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7,PodSandboxId:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723742950210072284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1f2f668-3661-40f7-989f-c702690a1f89 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.381956340Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8d79487d-ffab-41f6-9f95-571c61f68ee2 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.382047607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8d79487d-ffab-41f6-9f95-571c61f68ee2 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.383397395Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=478eb430-2a83-4055-b027-90ec24949180 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.383968370Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743448383944706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=478eb430-2a83-4055-b027-90ec24949180 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.384557928Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=061385fa-3653-4759-8ee9-90c4dcf6c7e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.384626999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=061385fa-3653-4759-8ee9-90c4dcf6c7e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.384850774Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743172239012531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742979212891907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742978669055170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5b3e71b5c2a125da17643e4a273019b9d35fe1d6d57d95662dbfd5f406ed50,PodSandboxId:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723742977701324355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723742965431323988,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172374296
1580945625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b,PodSandboxId:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172374295380
3914568,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723742950245715779,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b,PodSandboxId:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723742950305057413,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723742950264725617,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7,PodSandboxId:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723742950210072284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=061385fa-3653-4759-8ee9-90c4dcf6c7e6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.409110631Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f788a111-28f8-4555-b9b9-aa5bd3ea7fc4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.409433167Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-lgsr4,Uid:17ac3df7-c2a0-40b5-b107-ab6a7a0417af,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723743169384422920,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:32:48.174187895Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-c5mlj,Uid:24146559-ea1d-42db-9f61-730ed436dea8,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1723742979101787072,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:29:37.292406926Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-kfczp,Uid:5d18cfeb-ccfe-4432-b999-510d84438c7a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742978523014840,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2
024-08-15T17:29:37.316213114Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:78d884cc-a5c3-4f94-b643-b6593cb3f622,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742977615642898,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"im
age\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T17:29:37.308288987Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&PodSandboxMetadata{Name:kube-proxy-s9hw4,Uid:f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742961118954940,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-08-15T17:29:20.797390904Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&PodSandboxMetadata{Name:kindnet-g8lqf,Uid:bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742961110702468,Labels:map[string]string{app: kindnet,controller-revision-hash: 65cbdfc95f,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T17:29:20.792238861Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-683878,Uid:1ec6ea2e6b66134608615076611d4422,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742950068680838,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1ec6ea2e6b66134608615076611d4422,kubernetes.io/config.seen: 2024-08-15T17:29:09.576413407Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-683878,Uid:5e9071bde150aa40cadfbb23f44a0dcf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742950047868016,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{kube
rnetes.io/config.hash: 5e9071bde150aa40cadfbb23f44a0dcf,kubernetes.io/config.seen: 2024-08-15T17:29:09.576415544Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&PodSandboxMetadata{Name:etcd-ha-683878,Uid:589cddf02c2fe63fd30bfcac06f62665,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742950044039460,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.17:2379,kubernetes.io/config.hash: 589cddf02c2fe63fd30bfcac06f62665,kubernetes.io/config.seen: 2024-08-15T17:29:09.576416751Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&PodS
andboxMetadata{Name:kube-apiserver-ha-683878,Uid:851d14d5b04b12dccb38d8220a38dbf7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742950031893400,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.17:8443,kubernetes.io/config.hash: 851d14d5b04b12dccb38d8220a38dbf7,kubernetes.io/config.seen: 2024-08-15T17:29:09.576409648Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-683878,Uid:a39f7390d1bf7da73874e9af0a17b36c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723742950024032417,Labels:map[string]string{component: kube-scheduler,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a39f7390d1bf7da73874e9af0a17b36c,kubernetes.io/config.seen: 2024-08-15T17:29:09.576414609Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f788a111-28f8-4555-b9b9-aa5bd3ea7fc4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.410110802Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=325184e6-8254-43d1-8049-c18cc8a22f9e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.410165604Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=325184e6-8254-43d1-8049-c18cc8a22f9e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.410398158Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743172239012531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742979212891907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742978669055170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5b3e71b5c2a125da17643e4a273019b9d35fe1d6d57d95662dbfd5f406ed50,PodSandboxId:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723742977701324355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723742965431323988,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172374296
1580945625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b,PodSandboxId:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172374295380
3914568,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723742950245715779,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b,PodSandboxId:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723742950305057413,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723742950264725617,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7,PodSandboxId:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723742950210072284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=325184e6-8254-43d1-8049-c18cc8a22f9e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.428427712Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89adea9b-880a-4a9e-8119-722013b551bf name=/runtime.v1.RuntimeService/Version
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.428590889Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89adea9b-880a-4a9e-8119-722013b551bf name=/runtime.v1.RuntimeService/Version
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.430966760Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed9edde3-d88a-48ca-af83-549a00cfb1e3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.431563516Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743448431540803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed9edde3-d88a-48ca-af83-549a00cfb1e3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.432155433Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=873d1450-f028-4fb7-ae1c-1beab46fda28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.432255950Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=873d1450-f028-4fb7-ae1c-1beab46fda28 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:37:28 ha-683878 crio[682]: time="2024-08-15 17:37:28.432535688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743172239012531,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742979212891907,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723742978669055170,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d5b3e71b5c2a125da17643e4a273019b9d35fe1d6d57d95662dbfd5f406ed50,PodSandboxId:a27d06298c6a489e8b47e461c258245103ba3a32f1eec496e93d5eca1370e9bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1723742977701324355,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CO
NTAINER_RUNNING,CreatedAt:1723742965431323988,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:172374296
1580945625,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b,PodSandboxId:2b2acfbffd44277eaa71af8c2ebca596d754e239b0a5169463913cf4eff6322f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:172374295380
3914568,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e9071bde150aa40cadfbb23f44a0dcf,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723742950245715779,Labels:map[string]string{
io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b,PodSandboxId:89f0c6b43382e374593533db10cc93f3211f40e69e03980de951a09771ddc3ea,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723742950305057413,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723742950264725617,Labels:map[string]string{io.kubernetes.container.n
ame: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7,PodSandboxId:6934cfc4e26f2fd47881e56c3e3b6905e63593bfa19ac1ed8e2cf8558587dc0d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723742950210072284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.nam
e: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=873d1450-f028-4fb7-ae1c-1beab46fda28 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c22e0c68e353d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   a48e946a0189a       busybox-7dff88458-lgsr4
	e2d856610b1da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   96be386135521       coredns-6f6b679f8f-c5mlj
	f085f1327c68a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   d330a801db93b       coredns-6f6b679f8f-kfczp
	8d5b3e71b5c2a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   a27d06298c6a4       storage-provisioner
	78d6dea2ba166       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago       Running             kindnet-cni               0                   64e069f270f02       kindnet-g8lqf
	ea81ebf55447c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago       Running             kube-proxy                0                   209398e9569b4       kube-proxy-s9hw4
	b6c95bb7bfbe2       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     8 minutes ago       Running             kube-vip                  0                   2b2acfbffd442       kube-vip-ha-683878
	4d96eb3cf9f84       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago       Running             kube-controller-manager   0                   89f0c6b43382e       kube-controller-manager-ha-683878
	08adcf281be8a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago       Running             etcd                      0                   b48feabdeccee       etcd-ha-683878
	d9b5d872cbe2c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago       Running             kube-scheduler            0                   a0ca28e1760aa       kube-scheduler-ha-683878
	c6948597165c3       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago       Running             kube-apiserver            0                   6934cfc4e26f2       kube-apiserver-ha-683878
	
	
	==> coredns [e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e] <==
	[INFO] 10.244.2.2:55769 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000128336s
	[INFO] 10.244.2.2:42789 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000091004s
	[INFO] 10.244.1.2:33661 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00022092s
	[INFO] 10.244.0.4:37543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001586797s
	[INFO] 10.244.0.4:39767 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147698s
	[INFO] 10.244.0.4:56644 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00111781s
	[INFO] 10.244.0.4:57862 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081256s
	[INFO] 10.244.2.2:39974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001814889s
	[INFO] 10.244.2.2:60048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001073479s
	[INFO] 10.244.2.2:59792 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116437s
	[INFO] 10.244.2.2:60453 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162311s
	[INFO] 10.244.2.2:38063 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074865s
	[INFO] 10.244.1.2:49382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204795s
	[INFO] 10.244.0.4:49451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020076s
	[INFO] 10.244.0.4:36025 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090742s
	[INFO] 10.244.1.2:40041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120543s
	[INFO] 10.244.1.2:44246 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135148s
	[INFO] 10.244.1.2:49551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109408s
	[INFO] 10.244.0.4:54048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242835s
	[INFO] 10.244.0.4:58043 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114208s
	[INFO] 10.244.0.4:57821 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014893s
	[INFO] 10.244.0.4:60055 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059928s
	[INFO] 10.244.2.2:59967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188473s
	[INFO] 10.244.2.2:46929 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173466s
	[INFO] 10.244.2.2:40321 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103061s
	
	
	==> coredns [f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b] <==
	[INFO] 10.244.1.2:47364 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000151565s
	[INFO] 10.244.1.2:55344 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.016525491s
	[INFO] 10.244.1.2:57120 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172831s
	[INFO] 10.244.1.2:55849 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.014643038s
	[INFO] 10.244.1.2:47083 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161478s
	[INFO] 10.244.1.2:45144 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142497s
	[INFO] 10.244.1.2:41019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147233s
	[INFO] 10.244.0.4:50547 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154587s
	[INFO] 10.244.0.4:60786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018138s
	[INFO] 10.244.0.4:51598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011869s
	[INFO] 10.244.0.4:59583 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005686s
	[INFO] 10.244.2.2:47444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121752s
	[INFO] 10.244.2.2:46973 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092024s
	[INFO] 10.244.2.2:42492 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092653s
	[INFO] 10.244.1.2:38440 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00026281s
	[INFO] 10.244.1.2:50999 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076764s
	[INFO] 10.244.1.2:46163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107061s
	[INFO] 10.244.0.4:36567 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099261s
	[INFO] 10.244.0.4:51415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079336s
	[INFO] 10.244.2.2:33646 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132168s
	[INFO] 10.244.2.2:41707 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123477s
	[INFO] 10.244.2.2:46838 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090831s
	[INFO] 10.244.2.2:46347 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071615s
	[INFO] 10.244.1.2:58233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222961s
	[INFO] 10.244.2.2:37537 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108341s
	
	
	==> describe nodes <==
	Name:               ha-683878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_29_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:29:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:37:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:33:21 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:33:21 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:33:21 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:33:21 +0000   Thu, 15 Aug 2024 17:29:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-683878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fae4a08d40d64f788bfe5305cfe9e22b
	  System UUID:                fae4a08d-40d6-4f78-8bfe-5305cfe9e22b
	  Boot ID:                    a20b912d-dbbf-42f1-bb62-642f6b4f28ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lgsr4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-6f6b679f8f-c5mlj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m7s
	  kube-system                 coredns-6f6b679f8f-kfczp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m7s
	  kube-system                 etcd-ha-683878                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m12s
	  kube-system                 kindnet-g8lqf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m8s
	  kube-system                 kube-apiserver-ha-683878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-controller-manager-ha-683878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-proxy-s9hw4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-ha-683878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-vip-ha-683878                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m6s   kube-proxy       
	  Normal  Starting                 8m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m12s  kubelet          Node ha-683878 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m12s  kubelet          Node ha-683878 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m12s  kubelet          Node ha-683878 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m8s   node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal  NodeReady                7m51s  kubelet          Node ha-683878 status is now: NodeReady
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal  RegisteredNode           5m2s   node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	
	
	Name:               ha-683878-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:31:02 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:33:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 17:33:05 +0000   Thu, 15 Aug 2024 17:34:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 17:33:05 +0000   Thu, 15 Aug 2024 17:34:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 17:33:05 +0000   Thu, 15 Aug 2024 17:34:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 17:33:05 +0000   Thu, 15 Aug 2024 17:34:37 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-683878-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f7afa772a5e433884c57e372a6611cf
	  System UUID:                8f7afa77-2a5e-4338-84c5-7e372a6611cf
	  Boot ID:                    7d53cde9-9e38-44a8-99a7-cf7f6e592677
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j8h8r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-683878-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m24s
	  kube-system                 kindnet-z5z9h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-apiserver-ha-683878-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 kube-controller-manager-ha-683878-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-proxy-89p4v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-683878-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-vip-ha-683878-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m21s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m26s (x8 over 6m26s)  kubelet          Node ha-683878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m26s (x8 over 6m26s)  kubelet          Node ha-683878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m26s (x7 over 6m26s)  kubelet          Node ha-683878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  NodeNotReady             2m51s                  node-controller  Node ha-683878-m02 status is now: NodeNotReady
	
	
	Name:               ha-683878-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_32_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:32:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:37:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:33:19 +0000   Thu, 15 Aug 2024 17:32:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:33:19 +0000   Thu, 15 Aug 2024 17:32:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:33:19 +0000   Thu, 15 Aug 2024 17:32:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:33:19 +0000   Thu, 15 Aug 2024 17:32:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-683878-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2955de94b234fe7b9772686648cfdec
	  System UUID:                e2955de9-4b23-4fe7-b977-2686648cfdec
	  Boot ID:                    59d90ed6-06e9-4243-bf30-f7876e81cc8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-sk47b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 etcd-ha-683878-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m8s
	  kube-system                 kindnet-6bccr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m10s
	  kube-system                 kube-apiserver-ha-683878-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-controller-manager-ha-683878-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-proxy-8bp98                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-scheduler-ha-683878-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-683878-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m10s (x8 over 5m10s)  kubelet          Node ha-683878-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s (x8 over 5m10s)  kubelet          Node ha-683878-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s (x7 over 5m10s)  kubelet          Node ha-683878-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal  RegisteredNode           5m7s                   node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal  RegisteredNode           5m2s                   node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	
	
	Name:               ha-683878-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_33_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:33:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:37:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:33:57 +0000   Thu, 15 Aug 2024 17:33:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:33:57 +0000   Thu, 15 Aug 2024 17:33:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:33:57 +0000   Thu, 15 Aug 2024 17:33:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:33:57 +0000   Thu, 15 Aug 2024 17:33:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-683878-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a40a481bcbcc4fd6871392be97e352cc
	  System UUID:                a40a481b-cbcc-4fd6-8713-92be97e352cc
	  Boot ID:                    79dd6bf7-1c68-4e72-a539-a47e9aa8429f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hmfn7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m2s
	  kube-system                 kube-proxy-8clcw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m58s                kube-proxy       
	  Normal  RegisteredNode           4m2s                 node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m2s (x2 over 4m2s)  kubelet          Node ha-683878-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m2s (x2 over 4m2s)  kubelet          Node ha-683878-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m2s (x2 over 4m2s)  kubelet          Node ha-683878-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m58s                node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal  RegisteredNode           3m57s                node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal  NodeReady                3m42s                kubelet          Node ha-683878-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Aug15 17:28] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050086] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039163] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.758056] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.450594] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.804535] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.632720] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.064329] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054606] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[Aug15 17:29] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.110126] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.269301] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.960612] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.119022] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.056299] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075028] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.095571] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.103797] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010085] kauditd_printk_skb: 34 callbacks suppressed
	[ +22.762994] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf] <==
	{"level":"warn","ts":"2024-08-15T17:37:28.653499Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.683639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.687542Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.695397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.701090Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.709113Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.715621Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.722252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.726367Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.729729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.737409Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.742818Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.748266Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.750893Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.752966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.753956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.763872Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.767242Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.770814Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.777706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.782154Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.785572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.789572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.794848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-08-15T17:37:28.806538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"2212c0bfe49c9415","from":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 17:37:28 up 8 min,  0 users,  load average: 0.23, 0.25, 0.15
	Linux ha-683878 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480] <==
	I0815 17:36:56.704929       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:37:06.705834       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:37:06.705935       1 main.go:299] handling current node
	I0815 17:37:06.705953       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:37:06.705958       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:37:06.706116       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:37:06.706146       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:37:06.706209       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:37:06.706229       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:37:16.711356       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:37:16.711400       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:37:16.711603       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:37:16.711612       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:37:16.711666       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:37:16.711686       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:37:16.711731       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:37:16.711752       1 main.go:299] handling current node
	I0815 17:37:26.703964       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:37:26.703994       1 main.go:299] handling current node
	I0815 17:37:26.704922       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:37:26.704947       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:37:26.705159       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:37:26.705167       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:37:26.705237       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:37:26.705242       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7] <==
	I0815 17:29:15.151418       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0815 17:29:15.161727       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17]
	I0815 17:29:15.162918       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 17:29:15.167306       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 17:29:15.382755       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 17:29:16.548678       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 17:29:16.570988       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0815 17:29:16.587491       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 17:29:20.734008       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0815 17:29:20.994540       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0815 17:32:53.838350       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52120: use of closed network connection
	E0815 17:32:54.014807       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52146: use of closed network connection
	E0815 17:32:54.200944       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52170: use of closed network connection
	E0815 17:32:54.419875       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52196: use of closed network connection
	E0815 17:32:54.597858       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52218: use of closed network connection
	E0815 17:32:54.768077       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52236: use of closed network connection
	E0815 17:32:54.955849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52248: use of closed network connection
	E0815 17:32:55.157786       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52268: use of closed network connection
	E0815 17:32:55.343844       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52274: use of closed network connection
	E0815 17:32:55.635258       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52304: use of closed network connection
	E0815 17:32:55.805339       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36266: use of closed network connection
	E0815 17:32:55.987801       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36294: use of closed network connection
	E0815 17:32:56.172253       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36320: use of closed network connection
	E0815 17:32:56.343147       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36346: use of closed network connection
	E0815 17:32:56.514633       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:36362: use of closed network connection
	
	
	==> kube-controller-manager [4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b] <==
	I0815 17:33:26.556699       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-683878-m04" podCIDRs=["10.244.3.0/24"]
	I0815 17:33:26.556761       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:26.556791       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:26.574347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:26.874723       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:27.061766       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:27.337917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:30.264213       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:30.265572       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-683878-m04"
	I0815 17:33:30.285394       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:31.205384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:31.228108       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:36.778763       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:46.285024       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683878-m04"
	I0815 17:33:46.285110       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:46.301417       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:46.987205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:33:57.171607       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:34:37.011165       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	I0815 17:34:37.011927       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683878-m04"
	I0815 17:34:37.034567       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	I0815 17:34:37.044252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.81322ms"
	I0815 17:34:37.044658       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="89.158µs"
	I0815 17:34:40.354378       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	I0815 17:34:42.280070       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	
	
	==> kube-proxy [ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:29:21.913244       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 17:29:21.929928       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	E0815 17:29:21.930207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:29:21.968539       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:29:21.968623       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:29:21.968663       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:29:21.971250       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:29:21.971675       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:29:21.971868       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:29:21.973356       1 config.go:197] "Starting service config controller"
	I0815 17:29:21.973423       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:29:21.973573       1 config.go:326] "Starting node config controller"
	I0815 17:29:21.973598       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:29:21.973540       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:29:21.973728       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:29:22.074190       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 17:29:22.074267       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:29:22.074282       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f] <==
	I0815 17:32:18.642595       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-6bccr" node="ha-683878-m03"
	E0815 17:32:18.666218       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-8bp98\": pod kube-proxy-8bp98 is already assigned to node \"ha-683878-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-8bp98" node="ha-683878-m03"
	E0815 17:32:18.668346       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 009b24bb-3d29-4ba6-b18f-0694f7479636(kube-system/kube-proxy-8bp98) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-8bp98"
	E0815 17:32:18.668396       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-8bp98\": pod kube-proxy-8bp98 is already assigned to node \"ha-683878-m03\"" pod="kube-system/kube-proxy-8bp98"
	I0815 17:32:18.668418       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-8bp98" node="ha-683878-m03"
	E0815 17:32:48.135118       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j8h8r\": pod busybox-7dff88458-j8h8r is already assigned to node \"ha-683878-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-j8h8r" node="ha-683878-m02"
	E0815 17:32:48.135707       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 6b5e6835-6da3-4460-97b8-8155d7edb3c4(default/busybox-7dff88458-j8h8r) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-j8h8r"
	E0815 17:32:48.136091       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-j8h8r\": pod busybox-7dff88458-j8h8r is already assigned to node \"ha-683878-m02\"" pod="default/busybox-7dff88458-j8h8r"
	I0815 17:32:48.136393       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-j8h8r" node="ha-683878-m02"
	E0815 17:32:48.191220       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lgsr4\": pod busybox-7dff88458-lgsr4 is already assigned to node \"ha-683878\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lgsr4" node="ha-683878"
	E0815 17:32:48.191414       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-sk47b\": pod busybox-7dff88458-sk47b is already assigned to node \"ha-683878-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-sk47b" node="ha-683878-m03"
	E0815 17:32:48.191554       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 0cc66ed5-a981-4fe1-8128-f12c914a8c45(default/busybox-7dff88458-sk47b) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-sk47b"
	E0815 17:32:48.191574       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-sk47b\": pod busybox-7dff88458-sk47b is already assigned to node \"ha-683878-m03\"" pod="default/busybox-7dff88458-sk47b"
	I0815 17:32:48.191598       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-sk47b" node="ha-683878-m03"
	E0815 17:32:48.191389       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 17ac3df7-c2a0-40b5-b107-ab6a7a0417af(default/busybox-7dff88458-lgsr4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lgsr4"
	E0815 17:32:48.191771       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lgsr4\": pod busybox-7dff88458-lgsr4 is already assigned to node \"ha-683878\"" pod="default/busybox-7dff88458-lgsr4"
	I0815 17:32:48.191899       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lgsr4" node="ha-683878"
	E0815 17:33:26.612943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dzspw\": pod kube-proxy-dzspw is already assigned to node \"ha-683878-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dzspw" node="ha-683878-m04"
	E0815 17:33:26.613188       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod eb8dfa16-0d1d-4ff8-8692-4268881e44c8(kube-system/kube-proxy-dzspw) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dzspw"
	E0815 17:33:26.613271       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dzspw\": pod kube-proxy-dzspw is already assigned to node \"ha-683878-m04\"" pod="kube-system/kube-proxy-dzspw"
	I0815 17:33:26.613349       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dzspw" node="ha-683878-m04"
	E0815 17:33:26.634591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hmfn7\": pod kindnet-hmfn7 is already assigned to node \"ha-683878-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hmfn7" node="ha-683878-m04"
	E0815 17:33:26.637167       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e58e4f5f-3ee5-4fa8-87c8-6caf24492efa(kube-system/kindnet-hmfn7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hmfn7"
	E0815 17:33:26.637925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hmfn7\": pod kindnet-hmfn7 is already assigned to node \"ha-683878-m04\"" pod="kube-system/kindnet-hmfn7"
	I0815 17:33:26.638049       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hmfn7" node="ha-683878-m04"
	
	
	==> kubelet <==
	Aug 15 17:36:16 ha-683878 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:36:16 ha-683878 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:36:16 ha-683878 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:36:16 ha-683878 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:36:16 ha-683878 kubelet[1316]: E0815 17:36:16.641194    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743376640918209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:16 ha-683878 kubelet[1316]: E0815 17:36:16.641235    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743376640918209,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:26 ha-683878 kubelet[1316]: E0815 17:36:26.643141    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743386642854804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:26 ha-683878 kubelet[1316]: E0815 17:36:26.643205    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743386642854804,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:36 ha-683878 kubelet[1316]: E0815 17:36:36.645199    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743396644917790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:36 ha-683878 kubelet[1316]: E0815 17:36:36.645529    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743396644917790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:46 ha-683878 kubelet[1316]: E0815 17:36:46.647206    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743406646873753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:46 ha-683878 kubelet[1316]: E0815 17:36:46.647645    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743406646873753,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:56 ha-683878 kubelet[1316]: E0815 17:36:56.649834    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743416649434074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:36:56 ha-683878 kubelet[1316]: E0815 17:36:56.650087    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743416649434074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:37:06 ha-683878 kubelet[1316]: E0815 17:37:06.651521    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743426650920207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:37:06 ha-683878 kubelet[1316]: E0815 17:37:06.651665    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743426650920207,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:37:16 ha-683878 kubelet[1316]: E0815 17:37:16.493078    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 17:37:16 ha-683878 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:37:16 ha-683878 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:37:16 ha-683878 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:37:16 ha-683878 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:37:16 ha-683878 kubelet[1316]: E0815 17:37:16.655303    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743436654775363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:37:16 ha-683878 kubelet[1316]: E0815 17:37:16.655347    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743436654775363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:37:26 ha-683878 kubelet[1316]: E0815 17:37:26.657376    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743446657085297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:37:26 ha-683878 kubelet[1316]: E0815 17:37:26.657416    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743446657085297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683878 -n ha-683878
helpers_test.go:261: (dbg) Run:  kubectl --context ha-683878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (58.51s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (380.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-683878 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-683878 -v=7 --alsologtostderr
E0815 17:37:47.733438   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:38:15.436377   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-683878 -v=7 --alsologtostderr: exit status 82 (2m1.823645052s)

                                                
                                                
-- stdout --
	* Stopping node "ha-683878-m04"  ...
	* Stopping node "ha-683878-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:37:30.243863   38398 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:37:30.243992   38398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:37:30.244001   38398 out.go:358] Setting ErrFile to fd 2...
	I0815 17:37:30.244005   38398 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:37:30.244204   38398 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:37:30.244463   38398 out.go:352] Setting JSON to false
	I0815 17:37:30.244572   38398 mustload.go:65] Loading cluster: ha-683878
	I0815 17:37:30.244924   38398 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:37:30.245057   38398 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:37:30.245253   38398 mustload.go:65] Loading cluster: ha-683878
	I0815 17:37:30.245480   38398 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:37:30.245537   38398 stop.go:39] StopHost: ha-683878-m04
	I0815 17:37:30.245989   38398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:30.246038   38398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:30.263683   38398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39759
	I0815 17:37:30.264177   38398 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:30.264763   38398 main.go:141] libmachine: Using API Version  1
	I0815 17:37:30.264789   38398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:30.265099   38398 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:30.267503   38398 out.go:177] * Stopping node "ha-683878-m04"  ...
	I0815 17:37:30.268619   38398 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 17:37:30.268642   38398 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:37:30.268834   38398 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 17:37:30.268857   38398 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:37:30.271601   38398 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:30.271986   38398 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:33:11 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:37:30.272022   38398 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:37:30.272201   38398 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:37:30.272343   38398 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:37:30.272507   38398 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:37:30.272635   38398 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:37:30.359386   38398 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 17:37:30.412654   38398 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 17:37:30.467297   38398 main.go:141] libmachine: Stopping "ha-683878-m04"...
	I0815 17:37:30.467342   38398 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:37:30.468790   38398 main.go:141] libmachine: (ha-683878-m04) Calling .Stop
	I0815 17:37:30.472032   38398 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 0/120
	I0815 17:37:31.619855   38398 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:37:31.621389   38398 main.go:141] libmachine: Machine "ha-683878-m04" was stopped.
	I0815 17:37:31.621407   38398 stop.go:75] duration metric: took 1.352790556s to stop
	I0815 17:37:31.621425   38398 stop.go:39] StopHost: ha-683878-m03
	I0815 17:37:31.621691   38398 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:37:31.621732   38398 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:37:31.635933   38398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34401
	I0815 17:37:31.636402   38398 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:37:31.636842   38398 main.go:141] libmachine: Using API Version  1
	I0815 17:37:31.636862   38398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:37:31.637203   38398 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:37:31.639113   38398 out.go:177] * Stopping node "ha-683878-m03"  ...
	I0815 17:37:31.640440   38398 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 17:37:31.640470   38398 main.go:141] libmachine: (ha-683878-m03) Calling .DriverName
	I0815 17:37:31.640689   38398 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 17:37:31.640708   38398 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHHostname
	I0815 17:37:31.643515   38398 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:31.643965   38398 main.go:141] libmachine: (ha-683878-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3c:07:a9", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:31:43 +0000 UTC Type:0 Mac:52:54:00:3c:07:a9 Iaid: IPaddr:192.168.39.102 Prefix:24 Hostname:ha-683878-m03 Clientid:01:52:54:00:3c:07:a9}
	I0815 17:37:31.644014   38398 main.go:141] libmachine: (ha-683878-m03) DBG | domain ha-683878-m03 has defined IP address 192.168.39.102 and MAC address 52:54:00:3c:07:a9 in network mk-ha-683878
	I0815 17:37:31.644143   38398 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHPort
	I0815 17:37:31.644300   38398 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHKeyPath
	I0815 17:37:31.644494   38398 main.go:141] libmachine: (ha-683878-m03) Calling .GetSSHUsername
	I0815 17:37:31.644638   38398 sshutil.go:53] new ssh client: &{IP:192.168.39.102 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m03/id_rsa Username:docker}
	I0815 17:37:31.728973   38398 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 17:37:31.783270   38398 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 17:37:31.838594   38398 main.go:141] libmachine: Stopping "ha-683878-m03"...
	I0815 17:37:31.838627   38398 main.go:141] libmachine: (ha-683878-m03) Calling .GetState
	I0815 17:37:31.840138   38398 main.go:141] libmachine: (ha-683878-m03) Calling .Stop
	I0815 17:37:31.843666   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 0/120
	I0815 17:37:32.845075   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 1/120
	I0815 17:37:33.846314   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 2/120
	I0815 17:37:34.847573   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 3/120
	I0815 17:37:35.849297   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 4/120
	I0815 17:37:36.850905   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 5/120
	I0815 17:37:37.852817   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 6/120
	I0815 17:37:38.854068   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 7/120
	I0815 17:37:39.855621   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 8/120
	I0815 17:37:40.856876   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 9/120
	I0815 17:37:41.858725   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 10/120
	I0815 17:37:42.860386   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 11/120
	I0815 17:37:43.861770   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 12/120
	I0815 17:37:44.863210   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 13/120
	I0815 17:37:45.864878   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 14/120
	I0815 17:37:46.866546   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 15/120
	I0815 17:37:47.867991   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 16/120
	I0815 17:37:48.869384   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 17/120
	I0815 17:37:49.871048   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 18/120
	I0815 17:37:50.872622   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 19/120
	I0815 17:37:51.874398   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 20/120
	I0815 17:37:52.875797   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 21/120
	I0815 17:37:53.877216   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 22/120
	I0815 17:37:54.878780   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 23/120
	I0815 17:37:55.880068   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 24/120
	I0815 17:37:56.881840   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 25/120
	I0815 17:37:57.883138   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 26/120
	I0815 17:37:58.884498   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 27/120
	I0815 17:37:59.885818   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 28/120
	I0815 17:38:00.887225   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 29/120
	I0815 17:38:01.889445   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 30/120
	I0815 17:38:02.891035   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 31/120
	I0815 17:38:03.892456   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 32/120
	I0815 17:38:04.893917   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 33/120
	I0815 17:38:05.895292   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 34/120
	I0815 17:38:06.897659   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 35/120
	I0815 17:38:07.898964   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 36/120
	I0815 17:38:08.900384   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 37/120
	I0815 17:38:09.901712   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 38/120
	I0815 17:38:10.902914   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 39/120
	I0815 17:38:11.904634   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 40/120
	I0815 17:38:12.906007   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 41/120
	I0815 17:38:13.907256   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 42/120
	I0815 17:38:14.908606   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 43/120
	I0815 17:38:15.909927   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 44/120
	I0815 17:38:16.911549   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 45/120
	I0815 17:38:17.912880   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 46/120
	I0815 17:38:18.914177   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 47/120
	I0815 17:38:19.915547   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 48/120
	I0815 17:38:20.916807   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 49/120
	I0815 17:38:21.918538   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 50/120
	I0815 17:38:22.919869   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 51/120
	I0815 17:38:23.921625   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 52/120
	I0815 17:38:24.922995   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 53/120
	I0815 17:38:25.924453   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 54/120
	I0815 17:38:26.926597   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 55/120
	I0815 17:38:27.928077   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 56/120
	I0815 17:38:28.929434   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 57/120
	I0815 17:38:29.930852   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 58/120
	I0815 17:38:30.932129   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 59/120
	I0815 17:38:31.933555   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 60/120
	I0815 17:38:32.934941   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 61/120
	I0815 17:38:33.936227   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 62/120
	I0815 17:38:34.937595   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 63/120
	I0815 17:38:35.938876   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 64/120
	I0815 17:38:36.940270   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 65/120
	I0815 17:38:37.941747   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 66/120
	I0815 17:38:38.943217   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 67/120
	I0815 17:38:39.944524   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 68/120
	I0815 17:38:40.945725   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 69/120
	I0815 17:38:41.947089   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 70/120
	I0815 17:38:42.948411   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 71/120
	I0815 17:38:43.949621   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 72/120
	I0815 17:38:44.950811   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 73/120
	I0815 17:38:45.951929   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 74/120
	I0815 17:38:46.953422   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 75/120
	I0815 17:38:47.954713   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 76/120
	I0815 17:38:48.956113   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 77/120
	I0815 17:38:49.957565   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 78/120
	I0815 17:38:50.958829   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 79/120
	I0815 17:38:51.960448   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 80/120
	I0815 17:38:52.962236   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 81/120
	I0815 17:38:53.963639   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 82/120
	I0815 17:38:54.965028   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 83/120
	I0815 17:38:55.966397   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 84/120
	I0815 17:38:56.968077   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 85/120
	I0815 17:38:57.969480   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 86/120
	I0815 17:38:58.970694   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 87/120
	I0815 17:38:59.971890   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 88/120
	I0815 17:39:00.973209   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 89/120
	I0815 17:39:01.974906   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 90/120
	I0815 17:39:02.976238   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 91/120
	I0815 17:39:03.977494   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 92/120
	I0815 17:39:04.978874   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 93/120
	I0815 17:39:05.980364   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 94/120
	I0815 17:39:06.982237   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 95/120
	I0815 17:39:07.983541   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 96/120
	I0815 17:39:08.984973   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 97/120
	I0815 17:39:09.986968   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 98/120
	I0815 17:39:10.988560   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 99/120
	I0815 17:39:11.990016   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 100/120
	I0815 17:39:12.991332   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 101/120
	I0815 17:39:13.992966   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 102/120
	I0815 17:39:14.994295   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 103/120
	I0815 17:39:15.996287   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 104/120
	I0815 17:39:16.997924   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 105/120
	I0815 17:39:17.999536   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 106/120
	I0815 17:39:19.000882   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 107/120
	I0815 17:39:20.002233   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 108/120
	I0815 17:39:21.003416   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 109/120
	I0815 17:39:22.005196   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 110/120
	I0815 17:39:23.006502   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 111/120
	I0815 17:39:24.007856   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 112/120
	I0815 17:39:25.009497   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 113/120
	I0815 17:39:26.010903   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 114/120
	I0815 17:39:27.012470   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 115/120
	I0815 17:39:28.013887   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 116/120
	I0815 17:39:29.015047   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 117/120
	I0815 17:39:30.016354   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 118/120
	I0815 17:39:31.017734   38398 main.go:141] libmachine: (ha-683878-m03) Waiting for machine to stop 119/120
	I0815 17:39:32.018674   38398 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 17:39:32.018744   38398 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 17:39:32.020566   38398 out.go:201] 
	W0815 17:39:32.021907   38398 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 17:39:32.021921   38398 out.go:270] * 
	* 
	W0815 17:39:32.024092   38398 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:39:32.026110   38398 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-683878 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-683878 --wait=true -v=7 --alsologtostderr
E0815 17:39:52.218750   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:42:47.733931   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-683878 --wait=true -v=7 --alsologtostderr: (4m16.240499905s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-683878
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683878 -n ha-683878
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683878 logs -n 25: (2.016303073s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m02:/home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m02 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04:/home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m04 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp testdata/cp-test.txt                                                | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878:/home/docker/cp-test_ha-683878-m04_ha-683878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878 sudo cat                                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m02:/home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m02 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03:/home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m03 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683878 node stop m02 -v=7                                                     | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-683878 node start m02 -v=7                                                    | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683878 -v=7                                                           | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-683878 -v=7                                                                | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-683878 --wait=true -v=7                                                    | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:39 UTC | 15 Aug 24 17:43 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683878                                                                | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:43 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:39:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:39:32.069104   38862 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:39:32.069562   38862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:39:32.069587   38862 out.go:358] Setting ErrFile to fd 2...
	I0815 17:39:32.069597   38862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:39:32.070015   38862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:39:32.070791   38862 out.go:352] Setting JSON to false
	I0815 17:39:32.071689   38862 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4918,"bootTime":1723738654,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:39:32.071741   38862 start.go:139] virtualization: kvm guest
	I0815 17:39:32.073756   38862 out.go:177] * [ha-683878] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:39:32.075182   38862 notify.go:220] Checking for updates...
	I0815 17:39:32.075203   38862 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:39:32.076463   38862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:39:32.077562   38862 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:39:32.078796   38862 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:39:32.080211   38862 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:39:32.081550   38862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:39:32.083084   38862 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:39:32.083208   38862 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:39:32.083639   38862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:39:32.083685   38862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:39:32.099179   38862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0815 17:39:32.099621   38862 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:39:32.100084   38862 main.go:141] libmachine: Using API Version  1
	I0815 17:39:32.100106   38862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:39:32.100401   38862 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:39:32.100576   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:39:32.137598   38862 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 17:39:32.138788   38862 start.go:297] selected driver: kvm2
	I0815 17:39:32.138812   38862 start.go:901] validating driver "kvm2" against &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:39:32.138949   38862 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:39:32.139293   38862 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:39:32.139400   38862 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 17:39:32.154124   38862 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 17:39:32.154785   38862 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:39:32.154839   38862 cni.go:84] Creating CNI manager for ""
	I0815 17:39:32.154851   38862 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 17:39:32.154909   38862 start.go:340] cluster config:
	{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:39:32.155029   38862 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:39:32.156944   38862 out.go:177] * Starting "ha-683878" primary control-plane node in "ha-683878" cluster
	I0815 17:39:32.158384   38862 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:39:32.158410   38862 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:39:32.158415   38862 cache.go:56] Caching tarball of preloaded images
	I0815 17:39:32.158477   38862 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:39:32.158487   38862 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:39:32.158595   38862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:39:32.158797   38862 start.go:360] acquireMachinesLock for ha-683878: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:39:32.158835   38862 start.go:364] duration metric: took 21.151µs to acquireMachinesLock for "ha-683878"
	I0815 17:39:32.158849   38862 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:39:32.158858   38862 fix.go:54] fixHost starting: 
	I0815 17:39:32.159090   38862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:39:32.159117   38862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:39:32.172822   38862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41557
	I0815 17:39:32.173320   38862 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:39:32.173780   38862 main.go:141] libmachine: Using API Version  1
	I0815 17:39:32.173816   38862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:39:32.174122   38862 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:39:32.174338   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:39:32.174484   38862 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:39:32.176017   38862 fix.go:112] recreateIfNeeded on ha-683878: state=Running err=<nil>
	W0815 17:39:32.176049   38862 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:39:32.177867   38862 out.go:177] * Updating the running kvm2 "ha-683878" VM ...
	I0815 17:39:32.179230   38862 machine.go:93] provisionDockerMachine start ...
	I0815 17:39:32.179248   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:39:32.179429   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.181659   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.182047   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.182070   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.182186   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.182342   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.182480   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.182594   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.182786   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:39:32.182994   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:39:32.183009   38862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:39:32.293580   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878
	
	I0815 17:39:32.293607   38862 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:39:32.293821   38862 buildroot.go:166] provisioning hostname "ha-683878"
	I0815 17:39:32.293849   38862 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:39:32.294039   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.296541   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.296998   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.297026   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.297183   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.297349   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.297504   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.297635   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.297780   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:39:32.297926   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:39:32.297937   38862 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683878 && echo "ha-683878" | sudo tee /etc/hostname
	I0815 17:39:32.411697   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878
	
	I0815 17:39:32.411722   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.414475   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.414970   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.415001   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.415137   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.415309   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.415483   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.415627   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.415769   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:39:32.415955   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:39:32.415978   38862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683878/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:39:32.522360   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:39:32.522387   38862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:39:32.522415   38862 buildroot.go:174] setting up certificates
	I0815 17:39:32.522426   38862 provision.go:84] configureAuth start
	I0815 17:39:32.522438   38862 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:39:32.522675   38862 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:39:32.525128   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.525490   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.525507   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.525674   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.527712   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.528019   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.528046   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.528175   38862 provision.go:143] copyHostCerts
	I0815 17:39:32.528207   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:39:32.528245   38862 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 17:39:32.528265   38862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:39:32.528344   38862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:39:32.528442   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:39:32.528467   38862 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 17:39:32.528474   38862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:39:32.528530   38862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:39:32.528592   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:39:32.528617   38862 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 17:39:32.528624   38862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:39:32.528664   38862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:39:32.528774   38862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.ha-683878 san=[127.0.0.1 192.168.39.17 ha-683878 localhost minikube]
	I0815 17:39:32.636345   38862 provision.go:177] copyRemoteCerts
	I0815 17:39:32.636413   38862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:39:32.636441   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.639099   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.639460   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.639483   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.639665   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.639810   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.639952   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.640085   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:39:32.726334   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:39:32.726405   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:39:32.754539   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:39:32.754606   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0815 17:39:32.783780   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:39:32.783852   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 17:39:32.811335   38862 provision.go:87] duration metric: took 288.899387ms to configureAuth
	I0815 17:39:32.811359   38862 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:39:32.811576   38862 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:39:32.811662   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.814396   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.814723   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.814738   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.814972   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.815132   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.815263   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.815387   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.815599   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:39:32.815796   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:39:32.815811   38862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:41:03.775800   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:41:03.775827   38862 machine.go:96] duration metric: took 1m31.59658408s to provisionDockerMachine
	I0815 17:41:03.775840   38862 start.go:293] postStartSetup for "ha-683878" (driver="kvm2")
	I0815 17:41:03.775851   38862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:41:03.775867   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:03.776176   38862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:41:03.776208   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:03.779391   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.779889   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:03.779915   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.780087   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:03.780312   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:03.780521   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:03.780655   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:41:03.864811   38862 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:41:03.869241   38862 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:41:03.869267   38862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:41:03.869331   38862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:41:03.869426   38862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 17:41:03.869436   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 17:41:03.869525   38862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:41:03.879341   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:41:03.903841   38862 start.go:296] duration metric: took 127.986478ms for postStartSetup
	I0815 17:41:03.903886   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:03.904208   38862 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 17:41:03.904237   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:03.906970   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.907384   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:03.907413   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.907575   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:03.907732   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:03.907861   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:03.908025   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	W0815 17:41:03.987297   38862 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0815 17:41:03.987321   38862 fix.go:56] duration metric: took 1m31.828466007s for fixHost
	I0815 17:41:03.987343   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:03.990266   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.990664   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:03.990706   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.990804   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:03.991015   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:03.991185   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:03.991312   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:03.991500   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:41:03.991696   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:41:03.991707   38862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:41:04.121545   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723743664.087354690
	
	I0815 17:41:04.121568   38862 fix.go:216] guest clock: 1723743664.087354690
	I0815 17:41:04.121577   38862 fix.go:229] Guest: 2024-08-15 17:41:04.08735469 +0000 UTC Remote: 2024-08-15 17:41:03.987328736 +0000 UTC m=+91.951042500 (delta=100.025954ms)
	I0815 17:41:04.121624   38862 fix.go:200] guest clock delta is within tolerance: 100.025954ms
	I0815 17:41:04.121630   38862 start.go:83] releasing machines lock for "ha-683878", held for 1m31.962786473s
	I0815 17:41:04.121649   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:04.121905   38862 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:41:04.124499   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.124877   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:04.124901   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.125053   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:04.125502   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:04.125640   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:04.125735   38862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:41:04.125764   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:04.125896   38862 ssh_runner.go:195] Run: cat /version.json
	I0815 17:41:04.125921   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:04.128271   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.128564   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.128654   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:04.128672   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.128847   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:04.128948   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:04.128980   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.129022   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:04.129169   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:04.129179   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:04.129432   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:04.129451   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:41:04.129602   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:04.129752   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:41:04.226027   38862 ssh_runner.go:195] Run: systemctl --version
	I0815 17:41:04.232302   38862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:41:04.397720   38862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:41:04.407837   38862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:41:04.407892   38862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:41:04.417727   38862 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 17:41:04.417748   38862 start.go:495] detecting cgroup driver to use...
	I0815 17:41:04.417825   38862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:41:04.433755   38862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:41:04.447914   38862 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:41:04.447963   38862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:41:04.461662   38862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:41:04.475867   38862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:41:04.621379   38862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:41:04.764995   38862 docker.go:233] disabling docker service ...
	I0815 17:41:04.765069   38862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:41:04.783080   38862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:41:04.797627   38862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:41:04.943292   38862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:41:05.102228   38862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:41:05.116223   38862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:41:05.134362   38862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:41:05.134425   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.144888   38862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:41:05.144938   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.155308   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.165401   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.175521   38862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:41:05.186012   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.196121   38862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.207182   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.217349   38862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:41:05.226638   38862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:41:05.235732   38862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:41:05.378170   38862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:41:06.975320   38862 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.597120855s)
	I0815 17:41:06.975346   38862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:41:06.975386   38862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:41:06.980951   38862 start.go:563] Will wait 60s for crictl version
	I0815 17:41:06.981009   38862 ssh_runner.go:195] Run: which crictl
	I0815 17:41:06.985245   38862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:41:07.027061   38862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 17:41:07.027152   38862 ssh_runner.go:195] Run: crio --version
	I0815 17:41:07.058984   38862 ssh_runner.go:195] Run: crio --version
	I0815 17:41:07.087834   38862 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:41:07.089529   38862 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:41:07.092155   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:07.092586   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:07.092609   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:07.092812   38862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:41:07.097529   38862 kubeadm.go:883] updating cluster {Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:41:07.097647   38862 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:41:07.097688   38862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:41:07.150944   38862 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:41:07.150963   38862 crio.go:433] Images already preloaded, skipping extraction
	I0815 17:41:07.151006   38862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:41:07.191861   38862 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:41:07.191881   38862 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:41:07.191890   38862 kubeadm.go:934] updating node { 192.168.39.17 8443 v1.31.0 crio true true} ...
	I0815 17:41:07.191991   38862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:41:07.192058   38862 ssh_runner.go:195] Run: crio config
	I0815 17:41:07.252553   38862 cni.go:84] Creating CNI manager for ""
	I0815 17:41:07.252575   38862 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 17:41:07.252588   38862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:41:07.252623   38862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683878 NodeName:ha-683878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:41:07.252817   38862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683878"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 17:41:07.252844   38862 kube-vip.go:115] generating kube-vip config ...
	I0815 17:41:07.252894   38862 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 17:41:07.264990   38862 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:41:07.265135   38862 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 17:41:07.265202   38862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:41:07.275209   38862 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:41:07.275261   38862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 17:41:07.284845   38862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0815 17:41:07.303077   38862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:41:07.321606   38862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0815 17:41:07.340251   38862 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 17:41:07.357392   38862 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:41:07.362239   38862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:41:07.504114   38862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:41:07.519790   38862 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878 for IP: 192.168.39.17
	I0815 17:41:07.519813   38862 certs.go:194] generating shared ca certs ...
	I0815 17:41:07.519832   38862 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:41:07.519984   38862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:41:07.520039   38862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:41:07.520052   38862 certs.go:256] generating profile certs ...
	I0815 17:41:07.520147   38862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key
	I0815 17:41:07.520180   38862 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.851b4a9f
	I0815 17:41:07.520207   38862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.851b4a9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17 192.168.39.232 192.168.39.102 192.168.39.254]
	I0815 17:41:07.662140   38862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.851b4a9f ...
	I0815 17:41:07.662175   38862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.851b4a9f: {Name:mkc62a4226ba91a3e49d7701fd21f6207f0f0426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:41:07.662356   38862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.851b4a9f ...
	I0815 17:41:07.662373   38862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.851b4a9f: {Name:mkdf89b8e447a517bf45b20d7a57fddbe5d2b4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:41:07.662467   38862 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.851b4a9f -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt
	I0815 17:41:07.662644   38862 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.851b4a9f -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key
	I0815 17:41:07.662804   38862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key
	I0815 17:41:07.662820   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:41:07.662838   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:41:07.662854   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:41:07.662874   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:41:07.662893   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:41:07.662912   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:41:07.662930   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:41:07.662948   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:41:07.663008   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 17:41:07.663049   38862 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 17:41:07.663062   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:41:07.663107   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:41:07.663142   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:41:07.663173   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:41:07.663226   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:41:07.663264   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 17:41:07.663286   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:41:07.663304   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 17:41:07.663820   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:41:07.693262   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:41:07.720390   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:41:07.746946   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:41:07.773341   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 17:41:07.799204   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:41:07.825823   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:41:07.853422   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:41:07.880957   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 17:41:07.908346   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:41:07.936051   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 17:41:07.960783   38862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:41:07.976951   38862 ssh_runner.go:195] Run: openssl version
	I0815 17:41:07.982704   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 17:41:07.993182   38862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 17:41:07.997597   38862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 17:41:07.997640   38862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 17:41:08.003245   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 17:41:08.013568   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 17:41:08.024551   38862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 17:41:08.029187   38862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 17:41:08.029232   38862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 17:41:08.035099   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:41:08.044659   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:41:08.055136   38862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:41:08.059625   38862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:41:08.059662   38862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:41:08.065385   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:41:08.074619   38862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:41:08.079138   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 17:41:08.088257   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 17:41:08.094041   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 17:41:08.099968   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 17:41:08.105752   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 17:41:08.111008   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 17:41:08.116355   38862 kubeadm.go:392] StartCluster: {Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:41:08.116513   38862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:41:08.116578   38862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:41:08.162470   38862 cri.go:89] found id: "f7ebf9d70ba5c61efd97508e79777cb3f39e8023f72ce96a5c2e17e64c015b46"
	I0815 17:41:08.162492   38862 cri.go:89] found id: "34d1790e226d4d4f4c8818c4700c96d66e0e17317dcf726dd7bca83a38f2574d"
	I0815 17:41:08.162497   38862 cri.go:89] found id: "43267532bd3a74eae62f14b5e2827a1722979ac5dae14e6ca9695963477cfb01"
	I0815 17:41:08.162502   38862 cri.go:89] found id: "f1cbca2356d05670475331f440acdbf693b96bdd7ab2a56ed7cb561f8a805f60"
	I0815 17:41:08.162506   38862 cri.go:89] found id: "e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e"
	I0815 17:41:08.162510   38862 cri.go:89] found id: "f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b"
	I0815 17:41:08.162514   38862 cri.go:89] found id: "78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480"
	I0815 17:41:08.162518   38862 cri.go:89] found id: "ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018"
	I0815 17:41:08.162522   38862 cri.go:89] found id: "b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b"
	I0815 17:41:08.162530   38862 cri.go:89] found id: "4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b"
	I0815 17:41:08.162538   38862 cri.go:89] found id: "08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf"
	I0815 17:41:08.162542   38862 cri.go:89] found id: "d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f"
	I0815 17:41:08.162547   38862 cri.go:89] found id: "c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7"
	I0815 17:41:08.162551   38862 cri.go:89] found id: ""
	I0815 17:41:08.162597   38862 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 15 17:43:48 ha-683878 crio[3712]: time="2024-08-15 17:43:48.998634329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743828998610698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=56efe55a-411b-4522-9b74-effa1645a2ba name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:43:48 ha-683878 crio[3712]: time="2024-08-15 17:43:48.999209987Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1f0b3c3-657f-4e2b-929b-eb836bc61bd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:48 ha-683878 crio[3712]: time="2024-08-15 17:43:48.999287241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1f0b3c3-657f-4e2b-929b-eb836bc61bd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:48.999850407Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7991f99cc40d07ae4f9c6c10cdbbe4e3a2c44440825726548dd7f026cee9734,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723743774500268941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723743716504780858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723743714483852937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5aa82aeff3a14d47e07f166dda30a6e1b96a5a598413fe9376287e1b6a852c,PodSandboxId:94417b32e4de91d8ef50c382d0a68b6b5ec3cda89c198729e6348b0f95b17abc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743707745885217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46a73bdcce113c405d62e05427c26faa8f7ab836f86acd5a2a328dc30ceba75,PodSandboxId:c6a58ea11b976958fb1026bfe0a01c8474e0ad066646167ae5084553b6637fea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723743689297240919,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fa6d6c8257ff26c4035ba26d0d5a23,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723743674473216689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac,PodSandboxId:d09d6d98d32509c845c8ebba33c31e1fd7e86fbe8adda902c31f000ec2f7f050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674681831745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc,PodSandboxId:3e5d344a0cd57c86474a3fe1c522e5994d48e36d5fdb0ea67a56599637ce3e2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723743674582559536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8,PodSandboxId:ad8dd7bbaa72409483ce2bce086fd68549c4224075ef0d94f7cb8a629e790376,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674667621055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac,PodSandboxId:19c7e4b9a3befcfad6acddb6cfd20c117a2ffe7a92ef4424d298ccc038809323,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723743674464261796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723743674522632573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723743674400895715,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20,PodSandboxId:33095ed4ba83900508889da7df45947b9ad377c0de1bf12db8a41d0f47dac0b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723743674370189001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7
390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e,PodSandboxId:ffedef4016532b63cccf05810f275ec9faf9b019133389ec85f7d346fd77677e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723743674350111680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723743172239149837,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742979212938357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742978669129293,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723742965431489421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723742961580953975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723742950245879075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723742950264883053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e1f0b3c3-657f-4e2b-929b-eb836bc61bd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.046684111Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5791cd66-f538-42ed-a217-d966d5639167 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.046760912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5791cd66-f538-42ed-a217-d966d5639167 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.047854810Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0bce40bf-9a4e-438c-8659-c3ada65d8df8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.048682647Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743829048655521,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0bce40bf-9a4e-438c-8659-c3ada65d8df8 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.049554128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5566f702-c9e9-4c01-a725-c7b40924a71d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.049627347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5566f702-c9e9-4c01-a725-c7b40924a71d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.050038094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7991f99cc40d07ae4f9c6c10cdbbe4e3a2c44440825726548dd7f026cee9734,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723743774500268941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723743716504780858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723743714483852937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5aa82aeff3a14d47e07f166dda30a6e1b96a5a598413fe9376287e1b6a852c,PodSandboxId:94417b32e4de91d8ef50c382d0a68b6b5ec3cda89c198729e6348b0f95b17abc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743707745885217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46a73bdcce113c405d62e05427c26faa8f7ab836f86acd5a2a328dc30ceba75,PodSandboxId:c6a58ea11b976958fb1026bfe0a01c8474e0ad066646167ae5084553b6637fea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723743689297240919,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fa6d6c8257ff26c4035ba26d0d5a23,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723743674473216689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac,PodSandboxId:d09d6d98d32509c845c8ebba33c31e1fd7e86fbe8adda902c31f000ec2f7f050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674681831745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc,PodSandboxId:3e5d344a0cd57c86474a3fe1c522e5994d48e36d5fdb0ea67a56599637ce3e2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723743674582559536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8,PodSandboxId:ad8dd7bbaa72409483ce2bce086fd68549c4224075ef0d94f7cb8a629e790376,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674667621055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac,PodSandboxId:19c7e4b9a3befcfad6acddb6cfd20c117a2ffe7a92ef4424d298ccc038809323,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723743674464261796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723743674522632573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723743674400895715,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20,PodSandboxId:33095ed4ba83900508889da7df45947b9ad377c0de1bf12db8a41d0f47dac0b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723743674370189001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7
390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e,PodSandboxId:ffedef4016532b63cccf05810f275ec9faf9b019133389ec85f7d346fd77677e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723743674350111680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723743172239149837,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742979212938357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742978669129293,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723742965431489421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723742961580953975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723742950245879075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723742950264883053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5566f702-c9e9-4c01-a725-c7b40924a71d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.109414370Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=102734f6-570c-434a-a7e3-ca7ff07bab7c name=/runtime.v1.RuntimeService/Version
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.109568420Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=102734f6-570c-434a-a7e3-ca7ff07bab7c name=/runtime.v1.RuntimeService/Version
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.111739565Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b9801472-0007-4a67-84e1-e6f41b5875d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.112207432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743829112184628,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b9801472-0007-4a67-84e1-e6f41b5875d1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.112966904Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fad34893-422e-4fac-b49a-fbff97ffb00d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.113028214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fad34893-422e-4fac-b49a-fbff97ffb00d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.114388666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7991f99cc40d07ae4f9c6c10cdbbe4e3a2c44440825726548dd7f026cee9734,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723743774500268941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723743716504780858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723743714483852937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5aa82aeff3a14d47e07f166dda30a6e1b96a5a598413fe9376287e1b6a852c,PodSandboxId:94417b32e4de91d8ef50c382d0a68b6b5ec3cda89c198729e6348b0f95b17abc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743707745885217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46a73bdcce113c405d62e05427c26faa8f7ab836f86acd5a2a328dc30ceba75,PodSandboxId:c6a58ea11b976958fb1026bfe0a01c8474e0ad066646167ae5084553b6637fea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723743689297240919,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fa6d6c8257ff26c4035ba26d0d5a23,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723743674473216689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac,PodSandboxId:d09d6d98d32509c845c8ebba33c31e1fd7e86fbe8adda902c31f000ec2f7f050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674681831745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc,PodSandboxId:3e5d344a0cd57c86474a3fe1c522e5994d48e36d5fdb0ea67a56599637ce3e2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723743674582559536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8,PodSandboxId:ad8dd7bbaa72409483ce2bce086fd68549c4224075ef0d94f7cb8a629e790376,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674667621055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac,PodSandboxId:19c7e4b9a3befcfad6acddb6cfd20c117a2ffe7a92ef4424d298ccc038809323,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723743674464261796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723743674522632573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723743674400895715,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20,PodSandboxId:33095ed4ba83900508889da7df45947b9ad377c0de1bf12db8a41d0f47dac0b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723743674370189001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7
390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e,PodSandboxId:ffedef4016532b63cccf05810f275ec9faf9b019133389ec85f7d346fd77677e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723743674350111680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723743172239149837,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742979212938357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742978669129293,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723742965431489421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723742961580953975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723742950245879075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723742950264883053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fad34893-422e-4fac-b49a-fbff97ffb00d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.169276694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad1d093b-f863-4b85-856b-cbccab4746c2 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.169374681Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad1d093b-f863-4b85-856b-cbccab4746c2 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.170309982Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=054e048f-c78d-4a8c-bfb0-0329c54c0338 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.171010006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743829170984669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=054e048f-c78d-4a8c-bfb0-0329c54c0338 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.171788959Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9019ddab-b8f4-44c5-93f2-5362f21158aa name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.171886601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9019ddab-b8f4-44c5-93f2-5362f21158aa name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:43:49 ha-683878 crio[3712]: time="2024-08-15 17:43:49.172493182Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7991f99cc40d07ae4f9c6c10cdbbe4e3a2c44440825726548dd7f026cee9734,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723743774500268941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723743716504780858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723743714483852937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5aa82aeff3a14d47e07f166dda30a6e1b96a5a598413fe9376287e1b6a852c,PodSandboxId:94417b32e4de91d8ef50c382d0a68b6b5ec3cda89c198729e6348b0f95b17abc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743707745885217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46a73bdcce113c405d62e05427c26faa8f7ab836f86acd5a2a328dc30ceba75,PodSandboxId:c6a58ea11b976958fb1026bfe0a01c8474e0ad066646167ae5084553b6637fea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723743689297240919,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fa6d6c8257ff26c4035ba26d0d5a23,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723743674473216689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac,PodSandboxId:d09d6d98d32509c845c8ebba33c31e1fd7e86fbe8adda902c31f000ec2f7f050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674681831745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc,PodSandboxId:3e5d344a0cd57c86474a3fe1c522e5994d48e36d5fdb0ea67a56599637ce3e2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723743674582559536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8,PodSandboxId:ad8dd7bbaa72409483ce2bce086fd68549c4224075ef0d94f7cb8a629e790376,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674667621055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac,PodSandboxId:19c7e4b9a3befcfad6acddb6cfd20c117a2ffe7a92ef4424d298ccc038809323,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723743674464261796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723743674522632573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723743674400895715,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20,PodSandboxId:33095ed4ba83900508889da7df45947b9ad377c0de1bf12db8a41d0f47dac0b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723743674370189001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7
390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e,PodSandboxId:ffedef4016532b63cccf05810f275ec9faf9b019133389ec85f7d346fd77677e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723743674350111680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723743172239149837,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742979212938357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742978669129293,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723742965431489421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723742961580953975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723742950245879075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723742950264883053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9019ddab-b8f4-44c5-93f2-5362f21158aa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e7991f99cc40d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      54 seconds ago       Running             storage-provisioner       5                   98ceb7eec453d       storage-provisioner
	cf2d808c645da       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   2                   3437fd59bb98e       kube-controller-manager-ha-683878
	24c56aa67e4e2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            3                   fec6bf06ea559       kube-apiserver-ha-683878
	1b5aa82aeff3a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   94417b32e4de9       busybox-7dff88458-lgsr4
	f46a73bdcce11       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   c6a58ea11b976       kube-vip-ha-683878
	4070ef99c378d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   d09d6d98d3250       coredns-6f6b679f8f-kfczp
	2f08b099f496b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   ad8dd7bbaa724       coredns-6f6b679f8f-c5mlj
	9ab2199424b0f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   3e5d344a0cd57       etcd-ha-683878
	5d5c6f725a729       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      2 minutes ago        Exited              kube-controller-manager   1                   3437fd59bb98e       kube-controller-manager-ha-683878
	5fd5ff6d7703f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       4                   98ceb7eec453d       storage-provisioner
	10eb11ca402df       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   19c7e4b9a3bef       kindnet-g8lqf
	74a59586f84f7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      2 minutes ago        Exited              kube-apiserver            2                   fec6bf06ea559       kube-apiserver-ha-683878
	f78a9b7480fe8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      2 minutes ago        Running             kube-scheduler            1                   33095ed4ba839       kube-scheduler-ha-683878
	d18b204d85660       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      2 minutes ago        Running             kube-proxy                1                   ffedef4016532       kube-proxy-s9hw4
	c22e0c68e353d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   a48e946a0189a       busybox-7dff88458-lgsr4
	e2d856610b1da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   96be386135521       coredns-6f6b679f8f-c5mlj
	f085f1327c68a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   0                   d330a801db93b       coredns-6f6b679f8f-kfczp
	78d6dea2ba166       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    14 minutes ago       Exited              kindnet-cni               0                   64e069f270f02       kindnet-g8lqf
	ea81ebf55447c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      14 minutes ago       Exited              kube-proxy                0                   209398e9569b4       kube-proxy-s9hw4
	08adcf281be8a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      14 minutes ago       Exited              etcd                      0                   b48feabdeccee       etcd-ha-683878
	d9b5d872cbe2c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      14 minutes ago       Exited              kube-scheduler            0                   a0ca28e1760aa       kube-scheduler-ha-683878
	
	
	==> coredns [2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40702->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1394359595]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:41:26.298) (total time: 10926ms):
	Trace[1394359595]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40702->10.96.0.1:443: read: connection reset by peer 10926ms (17:41:37.225)
	Trace[1394359595]: [10.926910792s] [10.926910792s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40702->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac] <==
	Trace[1447686097]: [10.6493328s] [10.6493328s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34312->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34324->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[806533450]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:41:26.786) (total time: 10439ms):
	Trace[806533450]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34324->10.96.0.1:443: read: connection reset by peer 10439ms (17:41:37.225)
	Trace[806533450]: [10.439430176s] [10.439430176s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34324->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e] <==
	[INFO] 10.244.1.2:33661 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00022092s
	[INFO] 10.244.0.4:37543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001586797s
	[INFO] 10.244.0.4:39767 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147698s
	[INFO] 10.244.0.4:56644 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00111781s
	[INFO] 10.244.0.4:57862 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081256s
	[INFO] 10.244.2.2:39974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001814889s
	[INFO] 10.244.2.2:60048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001073479s
	[INFO] 10.244.2.2:59792 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116437s
	[INFO] 10.244.2.2:60453 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162311s
	[INFO] 10.244.2.2:38063 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074865s
	[INFO] 10.244.1.2:49382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204795s
	[INFO] 10.244.0.4:49451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020076s
	[INFO] 10.244.0.4:36025 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090742s
	[INFO] 10.244.1.2:40041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120543s
	[INFO] 10.244.1.2:44246 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135148s
	[INFO] 10.244.1.2:49551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109408s
	[INFO] 10.244.0.4:54048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242835s
	[INFO] 10.244.0.4:58043 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114208s
	[INFO] 10.244.0.4:57821 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014893s
	[INFO] 10.244.0.4:60055 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059928s
	[INFO] 10.244.2.2:59967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188473s
	[INFO] 10.244.2.2:46929 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173466s
	[INFO] 10.244.2.2:40321 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103061s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b] <==
	[INFO] 10.244.1.2:57120 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172831s
	[INFO] 10.244.1.2:55849 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.014643038s
	[INFO] 10.244.1.2:47083 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161478s
	[INFO] 10.244.1.2:45144 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142497s
	[INFO] 10.244.1.2:41019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147233s
	[INFO] 10.244.0.4:50547 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154587s
	[INFO] 10.244.0.4:60786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018138s
	[INFO] 10.244.0.4:51598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011869s
	[INFO] 10.244.0.4:59583 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005686s
	[INFO] 10.244.2.2:47444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121752s
	[INFO] 10.244.2.2:46973 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092024s
	[INFO] 10.244.2.2:42492 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092653s
	[INFO] 10.244.1.2:38440 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00026281s
	[INFO] 10.244.1.2:50999 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076764s
	[INFO] 10.244.1.2:46163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107061s
	[INFO] 10.244.0.4:36567 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099261s
	[INFO] 10.244.0.4:51415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079336s
	[INFO] 10.244.2.2:33646 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132168s
	[INFO] 10.244.2.2:41707 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123477s
	[INFO] 10.244.2.2:46838 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090831s
	[INFO] 10.244.2.2:46347 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071615s
	[INFO] 10.244.1.2:58233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222961s
	[INFO] 10.244.2.2:37537 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108341s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-683878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_29_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:29:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:43:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:41:55 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:41:55 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:41:55 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:41:55 +0000   Thu, 15 Aug 2024 17:29:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-683878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fae4a08d40d64f788bfe5305cfe9e22b
	  System UUID:                fae4a08d-40d6-4f78-8bfe-5305cfe9e22b
	  Boot ID:                    a20b912d-dbbf-42f1-bb62-642f6b4f28ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lgsr4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-c5mlj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-6f6b679f8f-kfczp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-683878                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-g8lqf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-683878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-683878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-s9hw4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-683878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-683878                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 112s                   kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-683878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-683878 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-683878 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           14m                    node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-683878 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Warning  ContainerGCFailed        3m33s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m56s (x3 over 3m45s)  kubelet          Node ha-683878 status is now: NodeNotReady
	  Normal   RegisteredNode           117s                   node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal   RegisteredNode           109s                   node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal   RegisteredNode           38s                    node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	
	
	Name:               ha-683878-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:31:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:43:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:42:39 +0000   Thu, 15 Aug 2024 17:41:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:42:39 +0000   Thu, 15 Aug 2024 17:41:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:42:39 +0000   Thu, 15 Aug 2024 17:41:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:42:39 +0000   Thu, 15 Aug 2024 17:41:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-683878-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f7afa772a5e433884c57e372a6611cf
	  System UUID:                8f7afa77-2a5e-4338-84c5-7e372a6611cf
	  Boot ID:                    412a74de-bdfe-4d63-9208-6db0cac96729
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j8h8r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-683878-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-z5z9h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-683878-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-683878-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-89p4v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-683878-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-683878-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 82s                    kube-proxy       
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-683878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-683878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node ha-683878-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                    node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  NodeNotReady             9m12s                  node-controller  Node ha-683878-m02 status is now: NodeNotReady
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node ha-683878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node ha-683878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s (x7 over 2m19s)  kubelet          Node ha-683878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           117s                   node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           109s                   node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           38s                    node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	
	
	Name:               ha-683878-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_32_21_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:32:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:43:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:43:18 +0000   Thu, 15 Aug 2024 17:42:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:43:18 +0000   Thu, 15 Aug 2024 17:42:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:43:18 +0000   Thu, 15 Aug 2024 17:42:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:43:18 +0000   Thu, 15 Aug 2024 17:42:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.102
	  Hostname:    ha-683878-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2955de94b234fe7b9772686648cfdec
	  System UUID:                e2955de9-4b23-4fe7-b977-2686648cfdec
	  Boot ID:                    3828539f-1983-4113-ac22-27662d8c6288
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-sk47b                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-683878-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-6bccr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-683878-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-683878-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-8bp98                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-683878-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-683878-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 46s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-683878-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-683878-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-683878-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal   RegisteredNode           117s               node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal   RegisteredNode           109s               node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	  Normal   NodeNotReady             77s                node-controller  Node ha-683878-m03 status is now: NodeNotReady
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 62s                kubelet          Node ha-683878-m03 has been rebooted, boot id: 3828539f-1983-4113-ac22-27662d8c6288
	  Normal   NodeReady                62s                kubelet          Node ha-683878-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  61s (x2 over 62s)  kubelet          Node ha-683878-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x2 over 62s)  kubelet          Node ha-683878-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x2 over 62s)  kubelet          Node ha-683878-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           38s                node-controller  Node ha-683878-m03 event: Registered Node ha-683878-m03 in Controller
	
	
	Name:               ha-683878-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_33_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:33:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:43:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:43:41 +0000   Thu, 15 Aug 2024 17:43:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:43:41 +0000   Thu, 15 Aug 2024 17:43:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:43:41 +0000   Thu, 15 Aug 2024 17:43:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:43:41 +0000   Thu, 15 Aug 2024 17:43:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-683878-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a40a481bcbcc4fd6871392be97e352cc
	  System UUID:                a40a481b-cbcc-4fd6-8713-92be97e352cc
	  Boot ID:                    caa91bce-e6d9-47c8-afcb-4be75bb819d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-hmfn7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-8clcw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-683878-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-683878-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-683878-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-683878-m04 status is now: NodeReady
	  Normal   RegisteredNode           117s               node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   RegisteredNode           109s               node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   NodeNotReady             77s                node-controller  Node ha-683878-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           38s                node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s                 kubelet          Node ha-683878-m04 has been rebooted, boot id: caa91bce-e6d9-47c8-afcb-4be75bb819d5
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-683878-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-683878-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-683878-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                8s                 kubelet          Node ha-683878-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +9.632720] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.064329] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054606] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[Aug15 17:29] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.110126] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.269301] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.960612] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.119022] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.056299] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075028] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.095571] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.103797] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010085] kauditd_printk_skb: 34 callbacks suppressed
	[ +22.762994] kauditd_printk_skb: 26 callbacks suppressed
	[Aug15 17:37] kauditd_printk_skb: 1 callbacks suppressed
	[Aug15 17:41] systemd-fstab-generator[3632]: Ignoring "noauto" option for root device
	[  +0.151940] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.175057] systemd-fstab-generator[3658]: Ignoring "noauto" option for root device
	[  +0.149888] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.285757] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +2.121327] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +6.531137] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.353045] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.054114] kauditd_printk_skb: 1 callbacks suppressed
	[Aug15 17:42] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf] <==
	{"level":"warn","ts":"2024-08-15T17:39:33.118851Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"66440227808963d1","rtt":"8.741609ms","error":"dial tcp 192.168.39.232:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-15T17:39:33.119023Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"66440227808963d1","rtt":"1.130813ms","error":"dial tcp 192.168.39.232:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-15T17:39:33.243529Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T17:39:33.243721Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T17:39:33.243847Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"2212c0bfe49c9415","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T17:39:33.244042Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244085Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244129Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244242Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244297Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244349Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244378Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244415Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244532Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244604Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244732Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244785Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244834Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244938Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.248167Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"warn","ts":"2024-08-15T17:39:33.248279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.158679556s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-15T17:39:33.248321Z","caller":"traceutil/trace.go:171","msg":"trace[953703048] range","detail":"{range_begin:; range_end:; }","duration":"9.158740682s","start":"2024-08-15T17:39:24.089572Z","end":"2024-08-15T17:39:33.248313Z","steps":["trace[953703048] 'agreement among raft nodes before linearized reading'  (duration: 9.158678025s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T17:39:33.248352Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T17:39:33.248597Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-08-15T17:39:33.248609Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-683878","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	
	
	==> etcd [9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc] <==
	{"level":"warn","ts":"2024-08-15T17:42:42.526380Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.102:2380/version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:42.526737Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:45.751665Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"46deb178e6549eb8","rtt":"0s","error":"dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:45.753964Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"46deb178e6549eb8","rtt":"0s","error":"dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:46.528259Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.102:2380/version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:46.528416Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:50.531013Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.102:2380/version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:50.531303Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:50.752326Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"46deb178e6549eb8","rtt":"0s","error":"dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:50.754575Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"46deb178e6549eb8","rtt":"0s","error":"dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:54.534209Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.102:2380/version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:54.534599Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:55.753278Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"46deb178e6549eb8","rtt":"0s","error":"dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:55.755536Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"46deb178e6549eb8","rtt":"0s","error":"dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:58.537922Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.102:2380/version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:42:58.538073Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"46deb178e6549eb8","error":"Get \"https://192.168.39.102:2380/version\": dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:43:00.753875Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"46deb178e6549eb8","rtt":"0s","error":"dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-15T17:43:00.756168Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"46deb178e6549eb8","rtt":"0s","error":"dial tcp 192.168.39.102:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-15T17:43:01.688525Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:01.688595Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:01.708005Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:01.716173Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2212c0bfe49c9415","to":"46deb178e6549eb8","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T17:43:01.716305Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:01.729275Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2212c0bfe49c9415","to":"46deb178e6549eb8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T17:43:01.729334Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	
	
	==> kernel <==
	 17:43:49 up 15 min,  0 users,  load average: 0.06, 0.36, 0.29
	Linux ha-683878 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac] <==
	I0815 17:43:15.729604       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:43:25.732523       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:43:25.732557       1 main.go:299] handling current node
	I0815 17:43:25.732575       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:43:25.732585       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:43:25.732729       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:43:25.732762       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:43:25.732875       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:43:25.732905       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:43:35.738215       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:43:35.738275       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:43:35.738419       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:43:35.738511       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:43:35.738599       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:43:35.738634       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:43:35.738725       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:43:35.738759       1 main.go:299] handling current node
	I0815 17:43:45.728981       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:43:45.729022       1 main.go:299] handling current node
	I0815 17:43:45.729036       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:43:45.729043       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:43:45.729246       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:43:45.729417       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:43:45.729606       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:43:45.729652       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480] <==
	I0815 17:38:56.704382       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:39:06.704641       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:39:06.704700       1 main.go:299] handling current node
	I0815 17:39:06.704714       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:39:06.704720       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:39:06.704834       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:39:06.704857       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:39:06.704945       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:39:06.704967       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:39:16.708192       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:39:16.708234       1 main.go:299] handling current node
	I0815 17:39:16.708254       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:39:16.708260       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:39:16.708435       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:39:16.708515       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:39:16.708602       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:39:16.708622       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:39:26.704518       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:39:26.704546       1 main.go:299] handling current node
	I0815 17:39:26.704560       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:39:26.704564       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:39:26.704697       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:39:26.704712       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:39:26.704791       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:39:26.704796       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966] <==
	I0815 17:41:56.719517       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0815 17:41:56.819643       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 17:41:56.820099       1 aggregator.go:171] initial CRD sync complete...
	I0815 17:41:56.820164       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 17:41:56.820206       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 17:41:56.870770       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 17:41:56.874116       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 17:41:56.878163       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 17:41:56.878224       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 17:41:56.883063       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 17:41:56.883378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 17:41:56.908792       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 17:41:56.908825       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 17:41:56.915850       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 17:41:56.916025       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 17:41:56.916056       1 policy_source.go:224] refreshing policies
	I0815 17:41:56.929650       1 cache.go:39] Caches are synced for autoregister controller
	W0815 17:41:56.943133       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.232]
	I0815 17:41:56.946101       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 17:41:56.962540       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 17:41:56.970100       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 17:41:56.971224       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 17:41:57.683852       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 17:41:58.090277       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.17 192.168.39.232]
	W0815 17:42:08.089204       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.232]
	
	
	==> kube-apiserver [74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c] <==
	I0815 17:41:15.010787       1 options.go:228] external host was not specified, using 192.168.39.17
	I0815 17:41:15.018644       1 server.go:142] Version: v1.31.0
	I0815 17:41:15.018744       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:41:15.990517       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 17:41:16.023623       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 17:41:16.032189       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 17:41:16.032264       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 17:41:16.032539       1 instance.go:232] Using reconciler: lease
	W0815 17:41:35.987893       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 17:41:35.987974       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 17:41:36.033607       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0815 17:41:36.033612       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6] <==
	I0815 17:41:15.934850       1 serving.go:386] Generated self-signed cert in-memory
	I0815 17:41:16.922598       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 17:41:16.922638       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:41:16.924205       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 17:41:16.924603       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 17:41:16.924737       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 17:41:16.924865       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 17:41:37.039118       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.17:8443/healthz\": dial tcp 192.168.39.17:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8] <==
	I0815 17:42:25.871718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.591µs"
	I0815 17:42:32.501706       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	I0815 17:42:32.501984       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:42:32.526859       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	I0815 17:42:32.530962       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:42:32.702718       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.295316ms"
	I0815 17:42:32.702915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="69.05µs"
	I0815 17:42:35.287388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	I0815 17:42:37.769626       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:42:39.189976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m02"
	I0815 17:42:45.370016       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:42:47.855203       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	I0815 17:42:47.950150       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	I0815 17:42:47.963555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	I0815 17:42:49.008583       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="163.488µs"
	I0815 17:42:50.207711       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	I0815 17:43:10.441386       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.378331ms"
	I0815 17:43:10.441568       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.64µs"
	I0815 17:43:11.516295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:43:11.608812       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:43:18.612362       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	I0815 17:43:41.605183       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683878-m04"
	I0815 17:43:41.605773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:43:41.631078       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:43:42.714139       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	
	
	==> kube-proxy [d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:41:16.876685       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 17:41:19.945360       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 17:41:23.017716       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 17:41:29.160922       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 17:41:38.376920       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0815 17:41:56.672726       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	E0815 17:41:56.677597       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:41:57.031258       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:41:57.031311       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:41:57.031340       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:41:57.037841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:41:57.038148       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:41:57.038179       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:41:57.039896       1 config.go:197] "Starting service config controller"
	I0815 17:41:57.039954       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:41:57.039992       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:41:57.039997       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:41:57.040828       1 config.go:326] "Starting node config controller"
	I0815 17:41:57.040857       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:41:57.142031       1 shared_informer.go:320] Caches are synced for node config
	I0815 17:41:57.146594       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 17:41:57.146713       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018] <==
	E0815 17:38:21.769509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:24.840773       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:24.840865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:24.840773       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:24.840907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:27.915283       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:27.915371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:30.986926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:30.987522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:30.987361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:30.987810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:34.059296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:34.059406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:40.202841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:40.202932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:43.273041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:43.273137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:43.273265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:43.273286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:58.633957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:58.634321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:39:04.778741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:39:04.779616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:39:10.923319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:39:10.923519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f] <==
	I0815 17:32:48.191899       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lgsr4" node="ha-683878"
	E0815 17:33:26.612943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dzspw\": pod kube-proxy-dzspw is already assigned to node \"ha-683878-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dzspw" node="ha-683878-m04"
	E0815 17:33:26.613188       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod eb8dfa16-0d1d-4ff8-8692-4268881e44c8(kube-system/kube-proxy-dzspw) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dzspw"
	E0815 17:33:26.613271       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dzspw\": pod kube-proxy-dzspw is already assigned to node \"ha-683878-m04\"" pod="kube-system/kube-proxy-dzspw"
	I0815 17:33:26.613349       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dzspw" node="ha-683878-m04"
	E0815 17:33:26.634591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hmfn7\": pod kindnet-hmfn7 is already assigned to node \"ha-683878-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hmfn7" node="ha-683878-m04"
	E0815 17:33:26.637167       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e58e4f5f-3ee5-4fa8-87c8-6caf24492efa(kube-system/kindnet-hmfn7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hmfn7"
	E0815 17:33:26.637925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hmfn7\": pod kindnet-hmfn7 is already assigned to node \"ha-683878-m04\"" pod="kube-system/kindnet-hmfn7"
	I0815 17:33:26.638049       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hmfn7" node="ha-683878-m04"
	E0815 17:39:23.055696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0815 17:39:23.781930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0815 17:39:24.437850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0815 17:39:25.753215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0815 17:39:25.811419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0815 17:39:25.811713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0815 17:39:26.295620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0815 17:39:28.331511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0815 17:39:28.432167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0815 17:39:30.019565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0815 17:39:31.188017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0815 17:39:32.239768       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	I0815 17:39:32.931166       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0815 17:39:32.931721       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 17:39:32.932054       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0815 17:39:32.942197       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20] <==
	W0815 17:41:47.373951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.17:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:47.374063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.17:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:47.434932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.17:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:47.435053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.17:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:51.865310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.17:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:51.865371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.17:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:52.004795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.17:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:52.004976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.17:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:52.126634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.17:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:52.126835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.17:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:53.337654       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.17:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:53.337742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.17:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:53.717568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.17:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:53.717701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.17:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:56.732149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:41:56.732204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:41:56.732346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:41:56.732378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:41:56.734836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:41:56.734886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:41:56.734950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 17:41:56.734983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:41:56.735037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:41:56.735065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 17:41:58.949992       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 17:42:36 ha-683878 kubelet[1316]: E0815 17:42:36.714600    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743756714189877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:42:36 ha-683878 kubelet[1316]: E0815 17:42:36.714872    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743756714189877,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:42:42 ha-683878 kubelet[1316]: I0815 17:42:42.474639    1316 scope.go:117] "RemoveContainer" containerID="5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912"
	Aug 15 17:42:42 ha-683878 kubelet[1316]: E0815 17:42:42.474957    1316 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(78d884cc-a5c3-4f94-b643-b6593cb3f622)\"" pod="kube-system/storage-provisioner" podUID="78d884cc-a5c3-4f94-b643-b6593cb3f622"
	Aug 15 17:42:46 ha-683878 kubelet[1316]: E0815 17:42:46.716418    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743766716116345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:42:46 ha-683878 kubelet[1316]: E0815 17:42:46.716506    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743766716116345,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:42:54 ha-683878 kubelet[1316]: I0815 17:42:54.474623    1316 scope.go:117] "RemoveContainer" containerID="5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912"
	Aug 15 17:42:55 ha-683878 kubelet[1316]: I0815 17:42:55.677595    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-683878" podStartSLOduration=22.677565169 podStartE2EDuration="22.677565169s" podCreationTimestamp="2024-08-15 17:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-15 17:42:36.507545144 +0000 UTC m=+800.175643507" watchObservedRunningTime="2024-08-15 17:42:55.677565169 +0000 UTC m=+819.345663534"
	Aug 15 17:42:56 ha-683878 kubelet[1316]: E0815 17:42:56.718790    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743776718349125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:42:56 ha-683878 kubelet[1316]: E0815 17:42:56.719184    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743776718349125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:06 ha-683878 kubelet[1316]: E0815 17:43:06.723283    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743786722651141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:06 ha-683878 kubelet[1316]: E0815 17:43:06.723751    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743786722651141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:16 ha-683878 kubelet[1316]: E0815 17:43:16.492891    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 17:43:16 ha-683878 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:43:16 ha-683878 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:43:16 ha-683878 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:43:16 ha-683878 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:43:16 ha-683878 kubelet[1316]: E0815 17:43:16.725337    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743796724740367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:16 ha-683878 kubelet[1316]: E0815 17:43:16.725519    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743796724740367,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:26 ha-683878 kubelet[1316]: E0815 17:43:26.728248    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743806727776145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:26 ha-683878 kubelet[1316]: E0815 17:43:26.728832    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743806727776145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:36 ha-683878 kubelet[1316]: E0815 17:43:36.730979    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743816730551794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:36 ha-683878 kubelet[1316]: E0815 17:43:36.731027    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743816730551794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:46 ha-683878 kubelet[1316]: E0815 17:43:46.733570    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743826732773694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:43:46 ha-683878 kubelet[1316]: E0815 17:43:46.733643    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743826732773694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 17:43:48.674460   40194 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19450-13013/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683878 -n ha-683878
helpers_test.go:261: (dbg) Run:  kubectl --context ha-683878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (380.83s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (141.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 stop -v=7 --alsologtostderr
E0815 17:44:52.218460   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 stop -v=7 --alsologtostderr: exit status 82 (2m0.461112584s)

                                                
                                                
-- stdout --
	* Stopping node "ha-683878-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:44:07.854807   40606 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:44:07.855044   40606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:44:07.855053   40606 out.go:358] Setting ErrFile to fd 2...
	I0815 17:44:07.855057   40606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:44:07.855221   40606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:44:07.855422   40606 out.go:352] Setting JSON to false
	I0815 17:44:07.855498   40606 mustload.go:65] Loading cluster: ha-683878
	I0815 17:44:07.855820   40606 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:44:07.855909   40606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:44:07.856073   40606 mustload.go:65] Loading cluster: ha-683878
	I0815 17:44:07.856196   40606 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:44:07.856216   40606 stop.go:39] StopHost: ha-683878-m04
	I0815 17:44:07.856615   40606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:44:07.856660   40606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:44:07.871069   40606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34943
	I0815 17:44:07.871526   40606 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:44:07.872138   40606 main.go:141] libmachine: Using API Version  1
	I0815 17:44:07.872153   40606 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:44:07.872609   40606 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:44:07.874999   40606 out.go:177] * Stopping node "ha-683878-m04"  ...
	I0815 17:44:07.876532   40606 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 17:44:07.876559   40606 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:44:07.876770   40606 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 17:44:07.876795   40606 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:44:07.879457   40606 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:44:07.879816   40606 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:43:35 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:44:07.879849   40606 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:44:07.879977   40606 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:44:07.880126   40606 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:44:07.880274   40606 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:44:07.880388   40606 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	I0815 17:44:07.967736   40606 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 17:44:08.020920   40606 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 17:44:08.073642   40606 main.go:141] libmachine: Stopping "ha-683878-m04"...
	I0815 17:44:08.073669   40606 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:44:08.075226   40606 main.go:141] libmachine: (ha-683878-m04) Calling .Stop
	I0815 17:44:08.078302   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 0/120
	I0815 17:44:09.079599   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 1/120
	I0815 17:44:10.081790   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 2/120
	I0815 17:44:11.083091   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 3/120
	I0815 17:44:12.084508   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 4/120
	I0815 17:44:13.086474   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 5/120
	I0815 17:44:14.087706   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 6/120
	I0815 17:44:15.089188   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 7/120
	I0815 17:44:16.090777   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 8/120
	I0815 17:44:17.091956   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 9/120
	I0815 17:44:18.094201   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 10/120
	I0815 17:44:19.095342   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 11/120
	I0815 17:44:20.096699   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 12/120
	I0815 17:44:21.098946   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 13/120
	I0815 17:44:22.100325   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 14/120
	I0815 17:44:23.102182   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 15/120
	I0815 17:44:24.103929   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 16/120
	I0815 17:44:25.105157   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 17/120
	I0815 17:44:26.107099   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 18/120
	I0815 17:44:27.108482   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 19/120
	I0815 17:44:28.110188   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 20/120
	I0815 17:44:29.111535   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 21/120
	I0815 17:44:30.112844   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 22/120
	I0815 17:44:31.114228   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 23/120
	I0815 17:44:32.115641   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 24/120
	I0815 17:44:33.117393   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 25/120
	I0815 17:44:34.118784   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 26/120
	I0815 17:44:35.120164   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 27/120
	I0815 17:44:36.121407   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 28/120
	I0815 17:44:37.123213   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 29/120
	I0815 17:44:38.125504   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 30/120
	I0815 17:44:39.126912   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 31/120
	I0815 17:44:40.128205   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 32/120
	I0815 17:44:41.129734   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 33/120
	I0815 17:44:42.131242   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 34/120
	I0815 17:44:43.133148   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 35/120
	I0815 17:44:44.134633   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 36/120
	I0815 17:44:45.136502   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 37/120
	I0815 17:44:46.137691   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 38/120
	I0815 17:44:47.138842   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 39/120
	I0815 17:44:48.140889   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 40/120
	I0815 17:44:49.142268   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 41/120
	I0815 17:44:50.144118   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 42/120
	I0815 17:44:51.145471   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 43/120
	I0815 17:44:52.147047   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 44/120
	I0815 17:44:53.148616   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 45/120
	I0815 17:44:54.150907   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 46/120
	I0815 17:44:55.152382   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 47/120
	I0815 17:44:56.153675   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 48/120
	I0815 17:44:57.155314   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 49/120
	I0815 17:44:58.157287   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 50/120
	I0815 17:44:59.159241   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 51/120
	I0815 17:45:00.160751   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 52/120
	I0815 17:45:01.163306   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 53/120
	I0815 17:45:02.164642   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 54/120
	I0815 17:45:03.166268   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 55/120
	I0815 17:45:04.167724   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 56/120
	I0815 17:45:05.169196   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 57/120
	I0815 17:45:06.171000   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 58/120
	I0815 17:45:07.172728   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 59/120
	I0815 17:45:08.174873   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 60/120
	I0815 17:45:09.176074   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 61/120
	I0815 17:45:10.177680   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 62/120
	I0815 17:45:11.179689   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 63/120
	I0815 17:45:12.181289   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 64/120
	I0815 17:45:13.183235   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 65/120
	I0815 17:45:14.184436   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 66/120
	I0815 17:45:15.185847   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 67/120
	I0815 17:45:16.187436   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 68/120
	I0815 17:45:17.189623   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 69/120
	I0815 17:45:18.191533   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 70/120
	I0815 17:45:19.192781   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 71/120
	I0815 17:45:20.194992   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 72/120
	I0815 17:45:21.196247   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 73/120
	I0815 17:45:22.197626   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 74/120
	I0815 17:45:23.199403   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 75/120
	I0815 17:45:24.200843   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 76/120
	I0815 17:45:25.202063   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 77/120
	I0815 17:45:26.203528   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 78/120
	I0815 17:45:27.204936   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 79/120
	I0815 17:45:28.206653   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 80/120
	I0815 17:45:29.208047   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 81/120
	I0815 17:45:30.210620   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 82/120
	I0815 17:45:31.212336   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 83/120
	I0815 17:45:32.213999   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 84/120
	I0815 17:45:33.216168   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 85/120
	I0815 17:45:34.217468   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 86/120
	I0815 17:45:35.218781   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 87/120
	I0815 17:45:36.220578   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 88/120
	I0815 17:45:37.221908   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 89/120
	I0815 17:45:38.223851   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 90/120
	I0815 17:45:39.225218   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 91/120
	I0815 17:45:40.226647   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 92/120
	I0815 17:45:41.228231   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 93/120
	I0815 17:45:42.229528   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 94/120
	I0815 17:45:43.231211   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 95/120
	I0815 17:45:44.232456   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 96/120
	I0815 17:45:45.233653   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 97/120
	I0815 17:45:46.234903   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 98/120
	I0815 17:45:47.236629   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 99/120
	I0815 17:45:48.238682   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 100/120
	I0815 17:45:49.239904   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 101/120
	I0815 17:45:50.241331   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 102/120
	I0815 17:45:51.242871   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 103/120
	I0815 17:45:52.244093   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 104/120
	I0815 17:45:53.245260   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 105/120
	I0815 17:45:54.246666   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 106/120
	I0815 17:45:55.247947   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 107/120
	I0815 17:45:56.249239   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 108/120
	I0815 17:45:57.250600   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 109/120
	I0815 17:45:58.252616   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 110/120
	I0815 17:45:59.253753   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 111/120
	I0815 17:46:00.255045   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 112/120
	I0815 17:46:01.256335   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 113/120
	I0815 17:46:02.257626   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 114/120
	I0815 17:46:03.259837   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 115/120
	I0815 17:46:04.261350   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 116/120
	I0815 17:46:05.262689   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 117/120
	I0815 17:46:06.264783   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 118/120
	I0815 17:46:07.266885   40606 main.go:141] libmachine: (ha-683878-m04) Waiting for machine to stop 119/120
	I0815 17:46:08.267495   40606 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 17:46:08.267571   40606 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 17:46:08.269303   40606 out.go:201] 
	W0815 17:46:08.270666   40606 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 17:46:08.270685   40606 out.go:270] * 
	* 
	W0815 17:46:08.273053   40606 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:46:08.274261   40606 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-683878 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr: exit status 3 (19.004090406s)

                                                
                                                
-- stdout --
	ha-683878
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-683878-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:46:08.317782   41025 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:46:08.317884   41025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:46:08.317893   41025 out.go:358] Setting ErrFile to fd 2...
	I0815 17:46:08.317897   41025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:46:08.318066   41025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:46:08.318213   41025 out.go:352] Setting JSON to false
	I0815 17:46:08.318237   41025 mustload.go:65] Loading cluster: ha-683878
	I0815 17:46:08.318332   41025 notify.go:220] Checking for updates...
	I0815 17:46:08.318588   41025 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:46:08.318602   41025 status.go:255] checking status of ha-683878 ...
	I0815 17:46:08.318969   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.319021   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.338156   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39121
	I0815 17:46:08.338556   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.339198   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.339227   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.339588   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.339792   41025 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:46:08.341506   41025 status.go:330] ha-683878 host status = "Running" (err=<nil>)
	I0815 17:46:08.341520   41025 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:46:08.341815   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.341857   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.356909   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33441
	I0815 17:46:08.357293   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.357719   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.357737   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.358024   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.358215   41025 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:46:08.360897   41025 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:46:08.361317   41025 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:46:08.361351   41025 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:46:08.361498   41025 host.go:66] Checking if "ha-683878" exists ...
	I0815 17:46:08.361884   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.361936   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.376049   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40223
	I0815 17:46:08.376422   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.376826   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.376849   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.377139   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.377328   41025 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:46:08.377500   41025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:46:08.377523   41025 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:46:08.380109   41025 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:46:08.380572   41025 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:46:08.380599   41025 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:46:08.380756   41025 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:46:08.380918   41025 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:46:08.381046   41025 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:46:08.381173   41025 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:46:08.462058   41025 ssh_runner.go:195] Run: systemctl --version
	I0815 17:46:08.469156   41025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:46:08.489576   41025 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:46:08.489607   41025 api_server.go:166] Checking apiserver status ...
	I0815 17:46:08.489651   41025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:46:08.506868   41025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4940/cgroup
	W0815 17:46:08.526732   41025 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4940/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:46:08.526782   41025 ssh_runner.go:195] Run: ls
	I0815 17:46:08.531467   41025 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:46:08.535662   41025 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:46:08.535681   41025 status.go:422] ha-683878 apiserver status = Running (err=<nil>)
	I0815 17:46:08.535692   41025 status.go:257] ha-683878 status: &{Name:ha-683878 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:46:08.535712   41025 status.go:255] checking status of ha-683878-m02 ...
	I0815 17:46:08.536013   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.536048   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.550990   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39817
	I0815 17:46:08.551429   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.551916   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.551942   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.552249   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.552431   41025 main.go:141] libmachine: (ha-683878-m02) Calling .GetState
	I0815 17:46:08.554032   41025 status.go:330] ha-683878-m02 host status = "Running" (err=<nil>)
	I0815 17:46:08.554046   41025 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:46:08.554318   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.554360   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.568416   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40487
	I0815 17:46:08.568791   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.569207   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.569227   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.569545   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.569730   41025 main.go:141] libmachine: (ha-683878-m02) Calling .GetIP
	I0815 17:46:08.572482   41025 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:46:08.572863   41025 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:41:19 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:46:08.572890   41025 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:46:08.573030   41025 host.go:66] Checking if "ha-683878-m02" exists ...
	I0815 17:46:08.573344   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.573382   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.587535   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43187
	I0815 17:46:08.587961   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.588462   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.588484   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.588809   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.588984   41025 main.go:141] libmachine: (ha-683878-m02) Calling .DriverName
	I0815 17:46:08.589142   41025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:46:08.589165   41025 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHHostname
	I0815 17:46:08.592149   41025 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:46:08.592564   41025 main.go:141] libmachine: (ha-683878-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:85:ab:06", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:41:19 +0000 UTC Type:0 Mac:52:54:00:85:ab:06 Iaid: IPaddr:192.168.39.232 Prefix:24 Hostname:ha-683878-m02 Clientid:01:52:54:00:85:ab:06}
	I0815 17:46:08.592591   41025 main.go:141] libmachine: (ha-683878-m02) DBG | domain ha-683878-m02 has defined IP address 192.168.39.232 and MAC address 52:54:00:85:ab:06 in network mk-ha-683878
	I0815 17:46:08.592672   41025 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHPort
	I0815 17:46:08.592814   41025 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHKeyPath
	I0815 17:46:08.592937   41025 main.go:141] libmachine: (ha-683878-m02) Calling .GetSSHUsername
	I0815 17:46:08.593048   41025 sshutil.go:53] new ssh client: &{IP:192.168.39.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m02/id_rsa Username:docker}
	I0815 17:46:08.681210   41025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:46:08.699305   41025 kubeconfig.go:125] found "ha-683878" server: "https://192.168.39.254:8443"
	I0815 17:46:08.699328   41025 api_server.go:166] Checking apiserver status ...
	I0815 17:46:08.699359   41025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:46:08.714067   41025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1594/cgroup
	W0815 17:46:08.724557   41025 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1594/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:46:08.724633   41025 ssh_runner.go:195] Run: ls
	I0815 17:46:08.728852   41025 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0815 17:46:08.732854   41025 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0815 17:46:08.732878   41025 status.go:422] ha-683878-m02 apiserver status = Running (err=<nil>)
	I0815 17:46:08.732889   41025 status.go:257] ha-683878-m02 status: &{Name:ha-683878-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:46:08.732907   41025 status.go:255] checking status of ha-683878-m04 ...
	I0815 17:46:08.733184   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.733214   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.747810   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I0815 17:46:08.748192   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.748606   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.748624   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.748910   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.749086   41025 main.go:141] libmachine: (ha-683878-m04) Calling .GetState
	I0815 17:46:08.750455   41025 status.go:330] ha-683878-m04 host status = "Running" (err=<nil>)
	I0815 17:46:08.750468   41025 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:46:08.750754   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.750784   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.765022   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39093
	I0815 17:46:08.765416   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.765834   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.765854   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.766127   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.766285   41025 main.go:141] libmachine: (ha-683878-m04) Calling .GetIP
	I0815 17:46:08.768574   41025 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:46:08.768954   41025 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:43:35 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:46:08.768986   41025 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:46:08.769149   41025 host.go:66] Checking if "ha-683878-m04" exists ...
	I0815 17:46:08.769501   41025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:46:08.769544   41025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:46:08.783891   41025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0815 17:46:08.784294   41025 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:46:08.784817   41025 main.go:141] libmachine: Using API Version  1
	I0815 17:46:08.784840   41025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:46:08.785113   41025 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:46:08.785285   41025 main.go:141] libmachine: (ha-683878-m04) Calling .DriverName
	I0815 17:46:08.785463   41025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:46:08.785479   41025 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHHostname
	I0815 17:46:08.787603   41025 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:46:08.787974   41025 main.go:141] libmachine: (ha-683878-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:76:a0", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:43:35 +0000 UTC Type:0 Mac:52:54:00:67:76:a0 Iaid: IPaddr:192.168.39.105 Prefix:24 Hostname:ha-683878-m04 Clientid:01:52:54:00:67:76:a0}
	I0815 17:46:08.788004   41025 main.go:141] libmachine: (ha-683878-m04) DBG | domain ha-683878-m04 has defined IP address 192.168.39.105 and MAC address 52:54:00:67:76:a0 in network mk-ha-683878
	I0815 17:46:08.788120   41025 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHPort
	I0815 17:46:08.788278   41025 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHKeyPath
	I0815 17:46:08.788426   41025 main.go:141] libmachine: (ha-683878-m04) Calling .GetSSHUsername
	I0815 17:46:08.788591   41025 sshutil.go:53] new ssh client: &{IP:192.168.39.105 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878-m04/id_rsa Username:docker}
	W0815 17:46:27.280714   41025 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.105:22: connect: no route to host
	W0815 17:46:27.280816   41025 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host
	E0815 17:46:27.280832   41025 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host
	I0815 17:46:27.280838   41025 status.go:257] ha-683878-m04 status: &{Name:ha-683878-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0815 17:46:27.280863   41025 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.105:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-683878 -n ha-683878
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-683878 logs -n 25: (1.691160565s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-683878 ssh -n ha-683878-m02 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04:/home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m04 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp testdata/cp-test.txt                                                | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878:/home/docker/cp-test_ha-683878-m04_ha-683878.txt                       |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878 sudo cat                                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878.txt                                 |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m02:/home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m02 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m03:/home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n                                                                 | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | ha-683878-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-683878 ssh -n ha-683878-m03 sudo cat                                          | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC | 15 Aug 24 17:34 UTC |
	|         | /home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-683878 node stop m02 -v=7                                                     | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:34 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-683878 node start m02 -v=7                                                    | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:36 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683878 -v=7                                                           | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-683878 -v=7                                                                | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:37 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-683878 --wait=true -v=7                                                    | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:39 UTC | 15 Aug 24 17:43 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-683878                                                                | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:43 UTC |                     |
	| node    | ha-683878 node delete m03 -v=7                                                   | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:43 UTC | 15 Aug 24 17:44 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-683878 stop -v=7                                                              | ha-683878 | jenkins | v1.33.1 | 15 Aug 24 17:44 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:39:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:39:32.069104   38862 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:39:32.069562   38862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:39:32.069587   38862 out.go:358] Setting ErrFile to fd 2...
	I0815 17:39:32.069597   38862 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:39:32.070015   38862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:39:32.070791   38862 out.go:352] Setting JSON to false
	I0815 17:39:32.071689   38862 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4918,"bootTime":1723738654,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:39:32.071741   38862 start.go:139] virtualization: kvm guest
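The hostinfo line just above is a JSON blob of host facts (hostname, uptime, kernel, virtualization). A minimal sketch, assuming a hand-rolled subset struct rather than the real gopsutil type, of decoding the fields the log goes on to report:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // HostInfo covers only a few of the fields shown in the hostinfo log line.
    type HostInfo struct {
        Hostname             string `json:"hostname"`
        Uptime               uint64 `json:"uptime"`
        KernelVersion        string `json:"kernelVersion"`
        VirtualizationSystem string `json:"virtualizationSystem"`
        VirtualizationRole   string `json:"virtualizationRole"`
    }

    func main() {
        raw := `{"hostname":"ubuntu-20-agent-10","uptime":4918,"kernelVersion":"5.15.0-1066-gcp","virtualizationSystem":"kvm","virtualizationRole":"guest"}`
        var hi HostInfo
        if err := json.Unmarshal([]byte(raw), &hi); err != nil {
            panic(err)
        }
        fmt.Printf("virtualization: %s %s\n", hi.VirtualizationSystem, hi.VirtualizationRole)
    }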
	I0815 17:39:32.073756   38862 out.go:177] * [ha-683878] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:39:32.075182   38862 notify.go:220] Checking for updates...
	I0815 17:39:32.075203   38862 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:39:32.076463   38862 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:39:32.077562   38862 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:39:32.078796   38862 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:39:32.080211   38862 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:39:32.081550   38862 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:39:32.083084   38862 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:39:32.083208   38862 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:39:32.083639   38862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:39:32.083685   38862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:39:32.099179   38862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I0815 17:39:32.099621   38862 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:39:32.100084   38862 main.go:141] libmachine: Using API Version  1
	I0815 17:39:32.100106   38862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:39:32.100401   38862 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:39:32.100576   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:39:32.137598   38862 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 17:39:32.138788   38862 start.go:297] selected driver: kvm2
	I0815 17:39:32.138812   38862 start.go:901] validating driver "kvm2" against &{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:39:32.138949   38862 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:39:32.139293   38862 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:39:32.139400   38862 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 17:39:32.154124   38862 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 17:39:32.154785   38862 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:39:32.154839   38862 cni.go:84] Creating CNI manager for ""
	I0815 17:39:32.154851   38862 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 17:39:32.154909   38862 start.go:340] cluster config:
	{Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:39:32.155029   38862 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:39:32.156944   38862 out.go:177] * Starting "ha-683878" primary control-plane node in "ha-683878" cluster
	I0815 17:39:32.158384   38862 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:39:32.158410   38862 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:39:32.158415   38862 cache.go:56] Caching tarball of preloaded images
	I0815 17:39:32.158477   38862 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 17:39:32.158487   38862 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 17:39:32.158595   38862 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/config.json ...
	I0815 17:39:32.158797   38862 start.go:360] acquireMachinesLock for ha-683878: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 17:39:32.158835   38862 start.go:364] duration metric: took 21.151µs to acquireMachinesLock for "ha-683878"
	I0815 17:39:32.158849   38862 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:39:32.158858   38862 fix.go:54] fixHost starting: 
	I0815 17:39:32.159090   38862 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:39:32.159117   38862 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:39:32.172822   38862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41557
	I0815 17:39:32.173320   38862 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:39:32.173780   38862 main.go:141] libmachine: Using API Version  1
	I0815 17:39:32.173816   38862 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:39:32.174122   38862 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:39:32.174338   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:39:32.174484   38862 main.go:141] libmachine: (ha-683878) Calling .GetState
	I0815 17:39:32.176017   38862 fix.go:112] recreateIfNeeded on ha-683878: state=Running err=<nil>
	W0815 17:39:32.176049   38862 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:39:32.177867   38862 out.go:177] * Updating the running kvm2 "ha-683878" VM ...
	I0815 17:39:32.179230   38862 machine.go:93] provisionDockerMachine start ...
	I0815 17:39:32.179248   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:39:32.179429   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.181659   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.182047   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.182070   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.182186   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.182342   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.182480   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.182594   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.182786   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:39:32.182994   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:39:32.183009   38862 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:39:32.293580   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878
	
	I0815 17:39:32.293607   38862 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:39:32.293821   38862 buildroot.go:166] provisioning hostname "ha-683878"
	I0815 17:39:32.293849   38862 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:39:32.294039   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.296541   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.296998   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.297026   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.297183   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.297349   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.297504   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.297635   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.297780   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:39:32.297926   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:39:32.297937   38862 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-683878 && echo "ha-683878" | sudo tee /etc/hostname
	I0815 17:39:32.411697   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-683878
	
	I0815 17:39:32.411722   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.414475   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.414970   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.415001   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.415137   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.415309   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.415483   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.415627   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.415769   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:39:32.415955   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:39:32.415978   38862 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-683878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-683878/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-683878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:39:32.522360   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
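Each provisioning step above is a one-shot shell command pushed over the SSH client noted in the sshutil.go lines (user docker, port 22, per-machine id_rsa key). A minimal sketch of that round trip with golang.org/x/crypto/ssh, using a placeholder key path and the hostname command from the log; error handling and host-key verification are simplified:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Placeholder key path; the real path comes from the machine config in the log.
        key, err := os.ReadFile("/path/to/machines/ha-683878/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.17:22", cfg)
        if err != nil {
            log.Fatal(err) // a stopped node surfaces here as "no route to host"
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        out, err := sess.Output("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }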
	I0815 17:39:32.522387   38862 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 17:39:32.522415   38862 buildroot.go:174] setting up certificates
	I0815 17:39:32.522426   38862 provision.go:84] configureAuth start
	I0815 17:39:32.522438   38862 main.go:141] libmachine: (ha-683878) Calling .GetMachineName
	I0815 17:39:32.522675   38862 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:39:32.525128   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.525490   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.525507   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.525674   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.527712   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.528019   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.528046   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.528175   38862 provision.go:143] copyHostCerts
	I0815 17:39:32.528207   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:39:32.528245   38862 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 17:39:32.528265   38862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 17:39:32.528344   38862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 17:39:32.528442   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:39:32.528467   38862 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 17:39:32.528474   38862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 17:39:32.528530   38862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 17:39:32.528592   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:39:32.528617   38862 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 17:39:32.528624   38862 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 17:39:32.528664   38862 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 17:39:32.528774   38862 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.ha-683878 san=[127.0.0.1 192.168.39.17 ha-683878 localhost minikube]
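The server-cert step lists the SANs baked into the certificate (127.0.0.1, 192.168.39.17, ha-683878, localhost, minikube) and signs it with the profile CA. As a simplified sketch of the SAN handling only, self-signed here rather than CA-signed as minikube actually does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-683878"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log line above.
            DNSNames:    []string{"ha-683878", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.17")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }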
	I0815 17:39:32.636345   38862 provision.go:177] copyRemoteCerts
	I0815 17:39:32.636413   38862 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:39:32.636441   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.639099   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.639460   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.639483   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.639665   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.639810   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.639952   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.640085   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:39:32.726334   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 17:39:32.726405   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:39:32.754539   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 17:39:32.754606   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0815 17:39:32.783780   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 17:39:32.783852   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 17:39:32.811335   38862 provision.go:87] duration metric: took 288.899387ms to configureAuth
	I0815 17:39:32.811359   38862 buildroot.go:189] setting minikube options for container-runtime
	I0815 17:39:32.811576   38862 config.go:182] Loaded profile config "ha-683878": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:39:32.811662   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:39:32.814396   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.814723   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:39:32.814738   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:39:32.814972   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:39:32.815132   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.815263   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:39:32.815387   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:39:32.815599   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:39:32.815796   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:39:32.815811   38862 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 17:41:03.775800   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 17:41:03.775827   38862 machine.go:96] duration metric: took 1m31.59658408s to provisionDockerMachine
	I0815 17:41:03.775840   38862 start.go:293] postStartSetup for "ha-683878" (driver="kvm2")
	I0815 17:41:03.775851   38862 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:41:03.775867   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:03.776176   38862 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:41:03.776208   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:03.779391   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.779889   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:03.779915   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.780087   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:03.780312   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:03.780521   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:03.780655   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:41:03.864811   38862 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:41:03.869241   38862 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 17:41:03.869267   38862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 17:41:03.869331   38862 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 17:41:03.869426   38862 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 17:41:03.869436   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 17:41:03.869525   38862 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:41:03.879341   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:41:03.903841   38862 start.go:296] duration metric: took 127.986478ms for postStartSetup
	I0815 17:41:03.903886   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:03.904208   38862 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0815 17:41:03.904237   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:03.906970   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.907384   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:03.907413   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.907575   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:03.907732   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:03.907861   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:03.908025   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	W0815 17:41:03.987297   38862 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0815 17:41:03.987321   38862 fix.go:56] duration metric: took 1m31.828466007s for fixHost
	I0815 17:41:03.987343   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:03.990266   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.990664   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:03.990706   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:03.990804   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:03.991015   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:03.991185   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:03.991312   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:03.991500   38862 main.go:141] libmachine: Using SSH client type: native
	I0815 17:41:03.991696   38862 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.17 22 <nil> <nil>}
	I0815 17:41:03.991707   38862 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 17:41:04.121545   38862 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723743664.087354690
	
	I0815 17:41:04.121568   38862 fix.go:216] guest clock: 1723743664.087354690
	I0815 17:41:04.121577   38862 fix.go:229] Guest: 2024-08-15 17:41:04.08735469 +0000 UTC Remote: 2024-08-15 17:41:03.987328736 +0000 UTC m=+91.951042500 (delta=100.025954ms)
	I0815 17:41:04.121624   38862 fix.go:200] guest clock delta is within tolerance: 100.025954ms
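The guest-clock check subtracts the local timestamp from the remote one and accepts the result if it is small enough. A toy sketch with the two timestamps from the log; the 2-second tolerance is an assumption, not minikube's constant:

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest/host clock delta is acceptable.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Date(2024, 8, 15, 17, 41, 3, 987328736, time.UTC)
        guest := time.Date(2024, 8, 15, 17, 41, 4, 87354690, time.UTC)
        delta, ok := withinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints delta=100.025954ms, as in the log
    }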
	I0815 17:41:04.121630   38862 start.go:83] releasing machines lock for "ha-683878", held for 1m31.962786473s
	I0815 17:41:04.121649   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:04.121905   38862 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:41:04.124499   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.124877   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:04.124901   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.125053   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:04.125502   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:04.125640   38862 main.go:141] libmachine: (ha-683878) Calling .DriverName
	I0815 17:41:04.125735   38862 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:41:04.125764   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:04.125896   38862 ssh_runner.go:195] Run: cat /version.json
	I0815 17:41:04.125921   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHHostname
	I0815 17:41:04.128271   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.128564   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.128654   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:04.128672   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.128847   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:04.128948   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:04.128980   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:04.129022   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:04.129169   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHPort
	I0815 17:41:04.129179   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:04.129432   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHKeyPath
	I0815 17:41:04.129451   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:41:04.129602   38862 main.go:141] libmachine: (ha-683878) Calling .GetSSHUsername
	I0815 17:41:04.129752   38862 sshutil.go:53] new ssh client: &{IP:192.168.39.17 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/ha-683878/id_rsa Username:docker}
	I0815 17:41:04.226027   38862 ssh_runner.go:195] Run: systemctl --version
	I0815 17:41:04.232302   38862 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 17:41:04.397720   38862 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 17:41:04.407837   38862 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 17:41:04.407892   38862 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:41:04.417727   38862 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 17:41:04.417748   38862 start.go:495] detecting cgroup driver to use...
	I0815 17:41:04.417825   38862 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 17:41:04.433755   38862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 17:41:04.447914   38862 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:41:04.447963   38862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:41:04.461662   38862 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:41:04.475867   38862 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:41:04.621379   38862 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:41:04.764995   38862 docker.go:233] disabling docker service ...
	I0815 17:41:04.765069   38862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:41:04.783080   38862 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:41:04.797627   38862 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:41:04.943292   38862 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:41:05.102228   38862 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:41:05.116223   38862 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:41:05.134362   38862 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 17:41:05.134425   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.144888   38862 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 17:41:05.144938   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.155308   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.165401   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.175521   38862 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:41:05.186012   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.196121   38862 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.207182   38862 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 17:41:05.217349   38862 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:41:05.226638   38862 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:41:05.235732   38862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:41:05.378170   38862 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 17:41:06.975320   38862 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.597120855s)
	I0815 17:41:06.975346   38862 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 17:41:06.975386   38862 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 17:41:06.980951   38862 start.go:563] Will wait 60s for crictl version
	I0815 17:41:06.981009   38862 ssh_runner.go:195] Run: which crictl
	I0815 17:41:06.985245   38862 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:41:07.027061   38862 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
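Both 60-second waits above (for the CRI socket, then for crictl to answer) boil down to poll-until-deadline loops. A generic sketch of that pattern using the socket path from the log; the one-second polling interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForPath polls until path exists or the deadline passes.
    func waitForPath(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second, time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("CRI socket is ready")
    }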
	I0815 17:41:07.027152   38862 ssh_runner.go:195] Run: crio --version
	I0815 17:41:07.058984   38862 ssh_runner.go:195] Run: crio --version
	I0815 17:41:07.087834   38862 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 17:41:07.089529   38862 main.go:141] libmachine: (ha-683878) Calling .GetIP
	I0815 17:41:07.092155   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:07.092586   38862 main.go:141] libmachine: (ha-683878) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:fe:4b:82", ip: ""} in network mk-ha-683878: {Iface:virbr1 ExpiryTime:2024-08-15 18:28:49 +0000 UTC Type:0 Mac:52:54:00:fe:4b:82 Iaid: IPaddr:192.168.39.17 Prefix:24 Hostname:ha-683878 Clientid:01:52:54:00:fe:4b:82}
	I0815 17:41:07.092609   38862 main.go:141] libmachine: (ha-683878) DBG | domain ha-683878 has defined IP address 192.168.39.17 and MAC address 52:54:00:fe:4b:82 in network mk-ha-683878
	I0815 17:41:07.092812   38862 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 17:41:07.097529   38862 kubeadm.go:883] updating cluster {Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Cl
usterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:41:07.097647   38862 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:41:07.097688   38862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:41:07.150944   38862 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:41:07.150963   38862 crio.go:433] Images already preloaded, skipping extraction
	I0815 17:41:07.151006   38862 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:41:07.191861   38862 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 17:41:07.191881   38862 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:41:07.191890   38862 kubeadm.go:934] updating node { 192.168.39.17 8443 v1.31.0 crio true true} ...
	I0815 17:41:07.191991   38862 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-683878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:41:07.192058   38862 ssh_runner.go:195] Run: crio config
	I0815 17:41:07.252553   38862 cni.go:84] Creating CNI manager for ""
	I0815 17:41:07.252575   38862 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0815 17:41:07.252588   38862 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:41:07.252623   38862 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.17 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-683878 NodeName:ha-683878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:41:07.252817   38862 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-683878"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
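Editor's note: the kubelet and kube-proxy blocks rendered above pin cgroupDriver to cgroupfs, point containerRuntimeEndpoint at the CRI-O socket, and zero out the disk-pressure eviction thresholds. As a minimal sketch only (not part of the minikube code path shown in this log), such a generated KubeletConfiguration could be unmarshalled and sanity-checked in Go; the file name kubelet-config.yaml is hypothetical and the example assumes the third-party gopkg.in/yaml.v3 package:

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// kubeletConfig models only the fields inspected below; the full
// KubeletConfiguration type is defined in k8s.io/kubelet/config/v1beta1.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	StaticPodPath            string `yaml:"staticPodPath"`
}

func main() {
	// kubelet-config.yaml is a hypothetical local copy of the block above.
	data, err := os.ReadFile("kubelet-config.yaml")
	if err != nil {
		log.Fatal(err)
	}
	var cfg kubeletConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		log.Fatal(err)
	}
	if cfg.CgroupDriver != "cgroupfs" {
		log.Fatalf("unexpected cgroup driver: %q", cfg.CgroupDriver)
	}
	if cfg.ContainerRuntimeEndpoint != "unix:///var/run/crio/crio.sock" {
		log.Fatalf("unexpected CRI endpoint: %q", cfg.ContainerRuntimeEndpoint)
	}
	fmt.Println("kubelet config OK, static pods at", cfg.StaticPodPath)
}
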
	I0815 17:41:07.252844   38862 kube-vip.go:115] generating kube-vip config ...
	I0815 17:41:07.252894   38862 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0815 17:41:07.264990   38862 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0815 17:41:07.265135   38862 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0815 17:41:07.265202   38862 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:41:07.275209   38862 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:41:07.275261   38862 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0815 17:41:07.284845   38862 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0815 17:41:07.303077   38862 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:41:07.321606   38862 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0815 17:41:07.340251   38862 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0815 17:41:07.357392   38862 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0815 17:41:07.362239   38862 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:41:07.504114   38862 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:41:07.519790   38862 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878 for IP: 192.168.39.17
	I0815 17:41:07.519813   38862 certs.go:194] generating shared ca certs ...
	I0815 17:41:07.519832   38862 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:41:07.519984   38862 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 17:41:07.520039   38862 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 17:41:07.520052   38862 certs.go:256] generating profile certs ...
	I0815 17:41:07.520147   38862 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/client.key
	I0815 17:41:07.520180   38862 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.851b4a9f
	I0815 17:41:07.520207   38862 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.851b4a9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.17 192.168.39.232 192.168.39.102 192.168.39.254]
	I0815 17:41:07.662140   38862 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.851b4a9f ...
	I0815 17:41:07.662175   38862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.851b4a9f: {Name:mkc62a4226ba91a3e49d7701fd21f6207f0f0426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:41:07.662356   38862 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.851b4a9f ...
	I0815 17:41:07.662373   38862 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.851b4a9f: {Name:mkdf89b8e447a517bf45b20d7a57fddbe5d2b4e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:41:07.662467   38862 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt.851b4a9f -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt
	I0815 17:41:07.662644   38862 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key.851b4a9f -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key
	I0815 17:41:07.662804   38862 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key
	I0815 17:41:07.662820   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 17:41:07.662838   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 17:41:07.662854   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 17:41:07.662874   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 17:41:07.662893   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 17:41:07.662912   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 17:41:07.662930   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 17:41:07.662948   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 17:41:07.663008   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 17:41:07.663049   38862 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 17:41:07.663062   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:41:07.663107   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:41:07.663142   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:41:07.663173   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 17:41:07.663226   38862 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 17:41:07.663264   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 17:41:07.663286   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:41:07.663304   38862 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 17:41:07.663820   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:41:07.693262   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 17:41:07.720390   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:41:07.746946   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 17:41:07.773341   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 17:41:07.799204   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:41:07.825823   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:41:07.853422   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/ha-683878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:41:07.880957   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 17:41:07.908346   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:41:07.936051   38862 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 17:41:07.960783   38862 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:41:07.976951   38862 ssh_runner.go:195] Run: openssl version
	I0815 17:41:07.982704   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 17:41:07.993182   38862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 17:41:07.997597   38862 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 17:41:07.997640   38862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 17:41:08.003245   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 17:41:08.013568   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 17:41:08.024551   38862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 17:41:08.029187   38862 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 17:41:08.029232   38862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 17:41:08.035099   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:41:08.044659   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:41:08.055136   38862 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:41:08.059625   38862 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:41:08.059662   38862 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:41:08.065385   38862 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:41:08.074619   38862 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:41:08.079138   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 17:41:08.088257   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 17:41:08.094041   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 17:41:08.099968   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 17:41:08.105752   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 17:41:08.111008   38862 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
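Editor's note: each of the openssl x509 -checkend 86400 invocations above asks whether the given certificate will still be valid 24 hours (86,400 seconds) from now, exiting non-zero if it is about to expire. A rough Go equivalent of that check using crypto/x509 is sketched below; apiserver.crt is a hypothetical local path, whereas the log checks /var/lib/minikube/certs/*.crt on the node over SSH:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Read a PEM-encoded certificate from a hypothetical local file.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// is no longer valid 24 hours from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		log.Fatalf("certificate expires within 24h (NotAfter=%s)", cert.NotAfter)
	}
	fmt.Println("certificate valid for at least 24h")
}
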
	I0815 17:41:08.116355   38862 kubeadm.go:392] StartCluster: {Name:ha-683878 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 Clust
erName:ha-683878 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.17 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.232 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.102 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.105 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:41:08.116513   38862 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 17:41:08.116578   38862 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:41:08.162470   38862 cri.go:89] found id: "f7ebf9d70ba5c61efd97508e79777cb3f39e8023f72ce96a5c2e17e64c015b46"
	I0815 17:41:08.162492   38862 cri.go:89] found id: "34d1790e226d4d4f4c8818c4700c96d66e0e17317dcf726dd7bca83a38f2574d"
	I0815 17:41:08.162497   38862 cri.go:89] found id: "43267532bd3a74eae62f14b5e2827a1722979ac5dae14e6ca9695963477cfb01"
	I0815 17:41:08.162502   38862 cri.go:89] found id: "f1cbca2356d05670475331f440acdbf693b96bdd7ab2a56ed7cb561f8a805f60"
	I0815 17:41:08.162506   38862 cri.go:89] found id: "e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e"
	I0815 17:41:08.162510   38862 cri.go:89] found id: "f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b"
	I0815 17:41:08.162514   38862 cri.go:89] found id: "78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480"
	I0815 17:41:08.162518   38862 cri.go:89] found id: "ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018"
	I0815 17:41:08.162522   38862 cri.go:89] found id: "b6c95bb7bfbe2c06a349a370026128c0969e39b88ce22dff5a060a42827c947b"
	I0815 17:41:08.162530   38862 cri.go:89] found id: "4d96eb3cf9f846f9c9ede73f8bbf8503748f3da80a8f919932ebe179f528d25b"
	I0815 17:41:08.162538   38862 cri.go:89] found id: "08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf"
	I0815 17:41:08.162542   38862 cri.go:89] found id: "d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f"
	I0815 17:41:08.162547   38862 cri.go:89] found id: "c6948597165c346c42890f5acaa78b26e33279be966f3dc48009b5d6699203d7"
	I0815 17:41:08.162551   38862 cri.go:89] found id: ""
	I0815 17:41:08.162597   38862 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.885716325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743987885686928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbf6777a-8dd8-4144-9c40-3ac4c8444118 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.886767593Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=41aba1bf-d586-4b3c-9c28-1a44e3488629 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.886864214Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=41aba1bf-d586-4b3c-9c28-1a44e3488629 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.887408105Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7991f99cc40d07ae4f9c6c10cdbbe4e3a2c44440825726548dd7f026cee9734,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723743774500268941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723743716504780858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723743714483852937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5aa82aeff3a14d47e07f166dda30a6e1b96a5a598413fe9376287e1b6a852c,PodSandboxId:94417b32e4de91d8ef50c382d0a68b6b5ec3cda89c198729e6348b0f95b17abc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743707745885217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46a73bdcce113c405d62e05427c26faa8f7ab836f86acd5a2a328dc30ceba75,PodSandboxId:c6a58ea11b976958fb1026bfe0a01c8474e0ad066646167ae5084553b6637fea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723743689297240919,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fa6d6c8257ff26c4035ba26d0d5a23,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723743674473216689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac,PodSandboxId:d09d6d98d32509c845c8ebba33c31e1fd7e86fbe8adda902c31f000ec2f7f050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674681831745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc,PodSandboxId:3e5d344a0cd57c86474a3fe1c522e5994d48e36d5fdb0ea67a56599637ce3e2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723743674582559536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8,PodSandboxId:ad8dd7bbaa72409483ce2bce086fd68549c4224075ef0d94f7cb8a629e790376,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674667621055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac,PodSandboxId:19c7e4b9a3befcfad6acddb6cfd20c117a2ffe7a92ef4424d298ccc038809323,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723743674464261796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723743674522632573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723743674400895715,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20,PodSandboxId:33095ed4ba83900508889da7df45947b9ad377c0de1bf12db8a41d0f47dac0b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723743674370189001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7
390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e,PodSandboxId:ffedef4016532b63cccf05810f275ec9faf9b019133389ec85f7d346fd77677e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723743674350111680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723743172239149837,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742979212938357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742978669129293,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723742965431489421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723742961580953975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723742950245879075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723742950264883053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=41aba1bf-d586-4b3c-9c28-1a44e3488629 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.930529004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=be0505e6-35ae-4939-ac34-f248b43b86ea name=/runtime.v1.RuntimeService/Version
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.930602331Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=be0505e6-35ae-4939-ac34-f248b43b86ea name=/runtime.v1.RuntimeService/Version
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.932824420Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb4c90e2-c156-4487-89ef-72acd5212c83 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.933281312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743987933257125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb4c90e2-c156-4487-89ef-72acd5212c83 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.933994481Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e1863d7-c7d2-4f4a-8949-24c2e6c876d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.934322053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e1863d7-c7d2-4f4a-8949-24c2e6c876d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.937163377Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7991f99cc40d07ae4f9c6c10cdbbe4e3a2c44440825726548dd7f026cee9734,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723743774500268941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723743716504780858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723743714483852937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5aa82aeff3a14d47e07f166dda30a6e1b96a5a598413fe9376287e1b6a852c,PodSandboxId:94417b32e4de91d8ef50c382d0a68b6b5ec3cda89c198729e6348b0f95b17abc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743707745885217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46a73bdcce113c405d62e05427c26faa8f7ab836f86acd5a2a328dc30ceba75,PodSandboxId:c6a58ea11b976958fb1026bfe0a01c8474e0ad066646167ae5084553b6637fea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723743689297240919,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fa6d6c8257ff26c4035ba26d0d5a23,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723743674473216689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac,PodSandboxId:d09d6d98d32509c845c8ebba33c31e1fd7e86fbe8adda902c31f000ec2f7f050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674681831745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc,PodSandboxId:3e5d344a0cd57c86474a3fe1c522e5994d48e36d5fdb0ea67a56599637ce3e2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723743674582559536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8,PodSandboxId:ad8dd7bbaa72409483ce2bce086fd68549c4224075ef0d94f7cb8a629e790376,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674667621055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac,PodSandboxId:19c7e4b9a3befcfad6acddb6cfd20c117a2ffe7a92ef4424d298ccc038809323,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723743674464261796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723743674522632573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723743674400895715,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20,PodSandboxId:33095ed4ba83900508889da7df45947b9ad377c0de1bf12db8a41d0f47dac0b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723743674370189001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7
390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e,PodSandboxId:ffedef4016532b63cccf05810f275ec9faf9b019133389ec85f7d346fd77677e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723743674350111680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723743172239149837,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742979212938357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742978669129293,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723742965431489421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723742961580953975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723742950245879075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723742950264883053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e1863d7-c7d2-4f4a-8949-24c2e6c876d3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.986052367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a54abcd6-853c-4ff0-80b0-c64f0e68e5c3 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.986167553Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a54abcd6-853c-4ff0-80b0-c64f0e68e5c3 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.987357660Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcd32474-a49a-46f5-aaee-a3df9d01b3bb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.988007891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743987987983590,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcd32474-a49a-46f5-aaee-a3df9d01b3bb name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.988651728Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c957d33b-a4b6-42d9-8e65-7dfe02511dc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.988725079Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c957d33b-a4b6-42d9-8e65-7dfe02511dc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:27 ha-683878 crio[3712]: time="2024-08-15 17:46:27.989139376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7991f99cc40d07ae4f9c6c10cdbbe4e3a2c44440825726548dd7f026cee9734,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723743774500268941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723743716504780858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723743714483852937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5aa82aeff3a14d47e07f166dda30a6e1b96a5a598413fe9376287e1b6a852c,PodSandboxId:94417b32e4de91d8ef50c382d0a68b6b5ec3cda89c198729e6348b0f95b17abc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743707745885217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46a73bdcce113c405d62e05427c26faa8f7ab836f86acd5a2a328dc30ceba75,PodSandboxId:c6a58ea11b976958fb1026bfe0a01c8474e0ad066646167ae5084553b6637fea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723743689297240919,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fa6d6c8257ff26c4035ba26d0d5a23,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723743674473216689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac,PodSandboxId:d09d6d98d32509c845c8ebba33c31e1fd7e86fbe8adda902c31f000ec2f7f050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674681831745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc,PodSandboxId:3e5d344a0cd57c86474a3fe1c522e5994d48e36d5fdb0ea67a56599637ce3e2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723743674582559536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8,PodSandboxId:ad8dd7bbaa72409483ce2bce086fd68549c4224075ef0d94f7cb8a629e790376,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674667621055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac,PodSandboxId:19c7e4b9a3befcfad6acddb6cfd20c117a2ffe7a92ef4424d298ccc038809323,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723743674464261796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723743674522632573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723743674400895715,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20,PodSandboxId:33095ed4ba83900508889da7df45947b9ad377c0de1bf12db8a41d0f47dac0b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723743674370189001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7
390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e,PodSandboxId:ffedef4016532b63cccf05810f275ec9faf9b019133389ec85f7d346fd77677e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723743674350111680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723743172239149837,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742979212938357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742978669129293,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723742965431489421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723742961580953975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723742950245879075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723742950264883053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c957d33b-a4b6-42d9-8e65-7dfe02511dc4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:28 ha-683878 crio[3712]: time="2024-08-15 17:46:28.039977537Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=deb9ee30-8f36-4e97-aeb6-731791a568a8 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:46:28 ha-683878 crio[3712]: time="2024-08-15 17:46:28.040101644Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=deb9ee30-8f36-4e97-aeb6-731791a568a8 name=/runtime.v1.RuntimeService/Version
	Aug 15 17:46:28 ha-683878 crio[3712]: time="2024-08-15 17:46:28.041292823Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4cc55fdf-0a19-4346-8b2f-8179c0ed554c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:46:28 ha-683878 crio[3712]: time="2024-08-15 17:46:28.041909169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743988041880616,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4cc55fdf-0a19-4346-8b2f-8179c0ed554c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 17:46:28 ha-683878 crio[3712]: time="2024-08-15 17:46:28.042714637Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f938cdf-160b-48ca-895f-9a4eaa73ecd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:28 ha-683878 crio[3712]: time="2024-08-15 17:46:28.042800696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f938cdf-160b-48ca-895f-9a4eaa73ecd2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 17:46:28 ha-683878 crio[3712]: time="2024-08-15 17:46:28.043249992Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e7991f99cc40d07ae4f9c6c10cdbbe4e3a2c44440825726548dd7f026cee9734,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723743774500268941,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 5,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723743716504780858,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723743714483852937,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1b5aa82aeff3a14d47e07f166dda30a6e1b96a5a598413fe9376287e1b6a852c,PodSandboxId:94417b32e4de91d8ef50c382d0a68b6b5ec3cda89c198729e6348b0f95b17abc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723743707745885217,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f46a73bdcce113c405d62e05427c26faa8f7ab836f86acd5a2a328dc30ceba75,PodSandboxId:c6a58ea11b976958fb1026bfe0a01c8474e0ad066646167ae5084553b6637fea,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1723743689297240919,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33fa6d6c8257ff26c4035ba26d0d5a23,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd5ff6d7703f9642497550b06256b3eb8fb80a3892ba3ec0c698d9211d02912,PodSandboxId:98ceb7eec453d45471ab51180a448422f396c577b2e2a0b2749014e795c22905,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723743674473216689,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78d884cc-a5c3-4f94-b643-b6593cb3f622,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac,PodSandboxId:d09d6d98d32509c845c8ebba33c31e1fd7e86fbe8adda902c31f000ec2f7f050,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674681831745,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9
153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc,PodSandboxId:3e5d344a0cd57c86474a3fe1c522e5994d48e36d5fdb0ea67a56599637ce3e2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723743674582559536,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8,PodSandboxId:ad8dd7bbaa72409483ce2bce086fd68549c4224075ef0d94f7cb8a629e790376,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723743674667621055,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"con
tainerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac,PodSandboxId:19c7e4b9a3befcfad6acddb6cfd20c117a2ffe7a92ef4424d298ccc038809323,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723743674464261796,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6,PodSandboxId:3437fd59bb98e922b0e37a8dad085055e36a2e309b401e3c9fa089b7423af42a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723743674522632573,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-683878,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 1ec6ea2e6b66134608615076611d4422,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c,PodSandboxId:fec6bf06ea55949144fe93c21d136ac092687c09329bc08f48f69db24692ebab,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723743674400895715,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 851d14d5b04b12dccb38d8220a38dbf7,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20,PodSandboxId:33095ed4ba83900508889da7df45947b9ad377c0de1bf12db8a41d0f47dac0b2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723743674370189001,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7
390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e,PodSandboxId:ffedef4016532b63cccf05810f275ec9faf9b019133389ec85f7d346fd77677e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723743674350111680,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Ann
otations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c22e0c68e353df52f29fd661a375d8153486c8d6f6187447b14f410a02b3a0a7,PodSandboxId:a48e946a0189add54664b726c3eaba516f3f27768279e115dc1eb6bd988fc904,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723743172239149837,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-lgsr4,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 17ac3df7-c2a0-40b5-b107-ab6a7a0417af,},Annota
tions:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e,PodSandboxId:96be386135521c8dcb8ba09b3c977c1463368daf38646da8ad7ae128e22ca750,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742979212938357,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-c5mlj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 24146559-ea1d-42db-9f61-730ed436dea8,},Annotations:map[string]string{io.kuber
netes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b,PodSandboxId:d330a801db93bc917091b3c917665e492e05d786f5d3daa14a7a8b935f5473eb,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723742978669129293,Labels:map[string]string{io.kubernetes.container.name: core
dns,io.kubernetes.pod.name: coredns-6f6b679f8f-kfczp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d18cfeb-ccfe-4432-b999-510d84438c7a,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480,PodSandboxId:64e069f270f021e01d4642ff6a9219a8921f0bbe8fb88c7985119e42c248e13a,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723742965431489421,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-g8lqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bbd49ca1-0f88-45ca-8fd5-dbb47d571c1e,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018,PodSandboxId:209398e9569b4f2a35394b4813367aee77c80e4738adab579905a3c26c34fd4f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeH
andler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723742961580953975,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-s9hw4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5aecbbe-7a68-4e05-a3a6-d9f00d78dcb1,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f,PodSandboxId:a0ca28e1760aabde9428e55cc3b15a6274702937c7de636ff756e890b2e4d2f2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f
0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723742950245879075,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39f7390d1bf7da73874e9af0a17b36c,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf,PodSandboxId:b48feabdecceee8b33691661c56e7aa9cda062f3dddc02860034e4fc61622118,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e6
61e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723742950264883053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-683878,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 589cddf02c2fe63fd30bfcac06f62665,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f938cdf-160b-48ca-895f-9a4eaa73ecd2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e7991f99cc40d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       5                   98ceb7eec453d       storage-provisioner
	cf2d808c645da       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   2                   3437fd59bb98e       kube-controller-manager-ha-683878
	24c56aa67e4e2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            3                   fec6bf06ea559       kube-apiserver-ha-683878
	1b5aa82aeff3a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   94417b32e4de9       busybox-7dff88458-lgsr4
	f46a73bdcce11       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   c6a58ea11b976       kube-vip-ha-683878
	4070ef99c378d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   d09d6d98d3250       coredns-6f6b679f8f-kfczp
	2f08b099f496b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   ad8dd7bbaa724       coredns-6f6b679f8f-c5mlj
	9ab2199424b0f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   3e5d344a0cd57       etcd-ha-683878
	5d5c6f725a729       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      5 minutes ago       Exited              kube-controller-manager   1                   3437fd59bb98e       kube-controller-manager-ha-683878
	5fd5ff6d7703f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       4                   98ceb7eec453d       storage-provisioner
	10eb11ca402df       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   19c7e4b9a3bef       kindnet-g8lqf
	74a59586f84f7       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      5 minutes ago       Exited              kube-apiserver            2                   fec6bf06ea559       kube-apiserver-ha-683878
	f78a9b7480fe8       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      5 minutes ago       Running             kube-scheduler            1                   33095ed4ba839       kube-scheduler-ha-683878
	d18b204d85660       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      5 minutes ago       Running             kube-proxy                1                   ffedef4016532       kube-proxy-s9hw4
	c22e0c68e353d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   a48e946a0189a       busybox-7dff88458-lgsr4
	e2d856610b1da       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   96be386135521       coredns-6f6b679f8f-c5mlj
	f085f1327c68a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   d330a801db93b       coredns-6f6b679f8f-kfczp
	78d6dea2ba166       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    17 minutes ago      Exited              kindnet-cni               0                   64e069f270f02       kindnet-g8lqf
	ea81ebf55447c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      17 minutes ago      Exited              kube-proxy                0                   209398e9569b4       kube-proxy-s9hw4
	08adcf281be8a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      17 minutes ago      Exited              etcd                      0                   b48feabdeccee       etcd-ha-683878
	d9b5d872cbe2c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      17 minutes ago      Exited              kube-scheduler            0                   a0ca28e1760aa       kube-scheduler-ha-683878
	
	
	==> coredns [2f08b099f496bb8b0a640998cd9a0724cfef6f168fba45b7bc274f8e2ed364c8] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40702->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1394359595]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:41:26.298) (total time: 10926ms):
	Trace[1394359595]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40702->10.96.0.1:443: read: connection reset by peer 10926ms (17:41:37.225)
	Trace[1394359595]: [10.926910792s] [10.926910792s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40702->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40726->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:40726->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4070ef99c378d5c5317666f05ce82a75603a4e8866bc82addec8bbec73b6a2ac] <==
	Trace[1447686097]: [10.6493328s] [10.6493328s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34312->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34324->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[806533450]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 17:41:26.786) (total time: 10439ms):
	Trace[806533450]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34324->10.96.0.1:443: read: connection reset by peer 10439ms (17:41:37.225)
	Trace[806533450]: [10.439430176s] [10.439430176s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:34324->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e2d856610b1da6515d7d43cc72bf72dd64b55c21ebd3b779eb8e3578387ee60e] <==
	[INFO] 10.244.1.2:33661 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00022092s
	[INFO] 10.244.0.4:37543 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001586797s
	[INFO] 10.244.0.4:39767 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147698s
	[INFO] 10.244.0.4:56644 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00111781s
	[INFO] 10.244.0.4:57862 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000081256s
	[INFO] 10.244.2.2:39974 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001814889s
	[INFO] 10.244.2.2:60048 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001073479s
	[INFO] 10.244.2.2:59792 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000116437s
	[INFO] 10.244.2.2:60453 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162311s
	[INFO] 10.244.2.2:38063 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074865s
	[INFO] 10.244.1.2:49382 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204795s
	[INFO] 10.244.0.4:49451 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020076s
	[INFO] 10.244.0.4:36025 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000090742s
	[INFO] 10.244.1.2:40041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120543s
	[INFO] 10.244.1.2:44246 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135148s
	[INFO] 10.244.1.2:49551 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109408s
	[INFO] 10.244.0.4:54048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000242835s
	[INFO] 10.244.0.4:58043 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000114208s
	[INFO] 10.244.0.4:57821 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00014893s
	[INFO] 10.244.0.4:60055 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059928s
	[INFO] 10.244.2.2:59967 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188473s
	[INFO] 10.244.2.2:46929 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173466s
	[INFO] 10.244.2.2:40321 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000103061s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f085f1327c68ac5b2c4928f08ae2e67e222463546d341d89836b291342f1417b] <==
	[INFO] 10.244.1.2:57120 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172831s
	[INFO] 10.244.1.2:55849 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.014643038s
	[INFO] 10.244.1.2:47083 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161478s
	[INFO] 10.244.1.2:45144 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000142497s
	[INFO] 10.244.1.2:41019 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147233s
	[INFO] 10.244.0.4:50547 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154587s
	[INFO] 10.244.0.4:60786 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00018138s
	[INFO] 10.244.0.4:51598 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011869s
	[INFO] 10.244.0.4:59583 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005686s
	[INFO] 10.244.2.2:47444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121752s
	[INFO] 10.244.2.2:46973 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092024s
	[INFO] 10.244.2.2:42492 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000092653s
	[INFO] 10.244.1.2:38440 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00026281s
	[INFO] 10.244.1.2:50999 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076764s
	[INFO] 10.244.1.2:46163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107061s
	[INFO] 10.244.0.4:36567 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000099261s
	[INFO] 10.244.0.4:51415 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079336s
	[INFO] 10.244.2.2:33646 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132168s
	[INFO] 10.244.2.2:41707 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123477s
	[INFO] 10.244.2.2:46838 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000090831s
	[INFO] 10.244.2.2:46347 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071615s
	[INFO] 10.244.1.2:58233 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222961s
	[INFO] 10.244.2.2:37537 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108341s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-683878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_29_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:29:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:46:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:41:55 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:41:55 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:41:55 +0000   Thu, 15 Aug 2024 17:29:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:41:55 +0000   Thu, 15 Aug 2024 17:29:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.17
	  Hostname:    ha-683878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fae4a08d40d64f788bfe5305cfe9e22b
	  System UUID:                fae4a08d-40d6-4f78-8bfe-5305cfe9e22b
	  Boot ID:                    a20b912d-dbbf-42f1-bb62-642f6b4f28ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lgsr4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-6f6b679f8f-c5mlj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-6f6b679f8f-kfczp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-ha-683878                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-g8lqf                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-683878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-683878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-s9hw4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-683878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-683878                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m31s                  kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  17m                    kubelet          Node ha-683878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     17m                    kubelet          Node ha-683878 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m                    kubelet          Node ha-683878 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           17m                    node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-683878 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Warning  ContainerGCFailed        6m12s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m35s (x3 over 6m24s)  kubelet          Node ha-683878 status is now: NodeNotReady
	  Normal   RegisteredNode           4m36s                  node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-683878 event: Registered Node ha-683878 in Controller
	
	
	Name:               ha-683878-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_31_05_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:31:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:46:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:42:39 +0000   Thu, 15 Aug 2024 17:41:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:42:39 +0000   Thu, 15 Aug 2024 17:41:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:42:39 +0000   Thu, 15 Aug 2024 17:41:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:42:39 +0000   Thu, 15 Aug 2024 17:41:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.232
	  Hostname:    ha-683878-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8f7afa772a5e433884c57e372a6611cf
	  System UUID:                8f7afa77-2a5e-4338-84c5-7e372a6611cf
	  Boot ID:                    412a74de-bdfe-4d63-9208-6db0cac96729
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j8h8r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-683878-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-z5z9h                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-683878-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-683878-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-89p4v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-683878-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-683878-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m1s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-683878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-683878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-683878-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                    node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           15m                    node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-683878-m02 status is now: NodeNotReady
	  Normal  Starting                 4m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s (x8 over 4m58s)  kubelet          Node ha-683878-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s (x8 over 4m58s)  kubelet          Node ha-683878-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s (x7 over 4m58s)  kubelet          Node ha-683878-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m36s                  node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-683878-m02 event: Registered Node ha-683878-m02 in Controller
	
	
	Name:               ha-683878-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-683878-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=ha-683878
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T17_33_27_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:33:26 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-683878-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:44:01 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 17:43:41 +0000   Thu, 15 Aug 2024 17:44:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 17:43:41 +0000   Thu, 15 Aug 2024 17:44:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 17:43:41 +0000   Thu, 15 Aug 2024 17:44:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 17:43:41 +0000   Thu, 15 Aug 2024 17:44:42 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.105
	  Hostname:    ha-683878-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a40a481bcbcc4fd6871392be97e352cc
	  System UUID:                a40a481b-cbcc-4fd6-8713-92be97e352cc
	  Boot ID:                    caa91bce-e6d9-47c8-afcb-4be75bb819d5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-cxxt4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kindnet-hmfn7              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-8clcw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-683878-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-683878-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-683878-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-683878-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m36s                  node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   RegisteredNode           4m28s                  node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Normal   RegisteredNode           3m17s                  node-controller  Node ha-683878-m04 event: Registered Node ha-683878-m04 in Controller
	  Warning  Rebooted                 2m47s                  kubelet          Node ha-683878-m04 has been rebooted, boot id: caa91bce-e6d9-47c8-afcb-4be75bb819d5
	  Normal   NodeAllocatableEnforced  2m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m47s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m47s (x2 over 2m47s)  kubelet          Node ha-683878-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m47s (x2 over 2m47s)  kubelet          Node ha-683878-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m47s (x2 over 2m47s)  kubelet          Node ha-683878-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                2m47s                  kubelet          Node ha-683878-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m56s)   node-controller  Node ha-683878-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.632720] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.064329] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054606] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[Aug15 17:29] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.110126] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.269301] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.960612] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.119022] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +0.056299] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.075028] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.095571] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.103797] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.010085] kauditd_printk_skb: 34 callbacks suppressed
	[ +22.762994] kauditd_printk_skb: 26 callbacks suppressed
	[Aug15 17:37] kauditd_printk_skb: 1 callbacks suppressed
	[Aug15 17:41] systemd-fstab-generator[3632]: Ignoring "noauto" option for root device
	[  +0.151940] systemd-fstab-generator[3644]: Ignoring "noauto" option for root device
	[  +0.175057] systemd-fstab-generator[3658]: Ignoring "noauto" option for root device
	[  +0.149888] systemd-fstab-generator[3670]: Ignoring "noauto" option for root device
	[  +0.285757] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +2.121327] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +6.531137] kauditd_printk_skb: 122 callbacks suppressed
	[ +12.353045] kauditd_printk_skb: 85 callbacks suppressed
	[ +10.054114] kauditd_printk_skb: 1 callbacks suppressed
	[Aug15 17:42] kauditd_printk_skb: 4 callbacks suppressed
	
	
	==> etcd [08adcf281be8a19e3d03327c4c98f85e3db53ca9fa8121b0fb7e87d43f578cbf] <==
	{"level":"warn","ts":"2024-08-15T17:39:33.118851Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"66440227808963d1","rtt":"8.741609ms","error":"dial tcp 192.168.39.232:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-15T17:39:33.119023Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"66440227808963d1","rtt":"1.130813ms","error":"dial tcp 192.168.39.232:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-08-15T17:39:33.243529Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T17:39:33.243721Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.17:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T17:39:33.243847Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"2212c0bfe49c9415","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-15T17:39:33.244042Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244085Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244129Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244242Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244297Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244349Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244378Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"66440227808963d1"}
	{"level":"info","ts":"2024-08-15T17:39:33.244415Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244532Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244604Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244732Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244785Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244834Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.244938Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:39:33.248167Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"warn","ts":"2024-08-15T17:39:33.248279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.158679556s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-08-15T17:39:33.248321Z","caller":"traceutil/trace.go:171","msg":"trace[953703048] range","detail":"{range_begin:; range_end:; }","duration":"9.158740682s","start":"2024-08-15T17:39:24.089572Z","end":"2024-08-15T17:39:33.248313Z","steps":["trace[953703048] 'agreement among raft nodes before linearized reading'  (duration: 9.158678025s)"],"step_count":1}
	{"level":"error","ts":"2024-08-15T17:39:33.248352Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-08-15T17:39:33.248597Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.17:2380"}
	{"level":"info","ts":"2024-08-15T17:39:33.248609Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-683878","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.17:2380"],"advertise-client-urls":["https://192.168.39.17:2379"]}
	
	
	==> etcd [9ab2199424b0f8cc53a71e3ce0aadbe9cd7e1f69bac844b532d11cfda9f5debc] <==
	{"level":"info","ts":"2024-08-15T17:43:01.688525Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:01.688595Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:01.708005Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:01.716173Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2212c0bfe49c9415","to":"46deb178e6549eb8","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-15T17:43:01.716305Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:01.729275Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"2212c0bfe49c9415","to":"46deb178e6549eb8","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-15T17:43:01.729334Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"warn","ts":"2024-08-15T17:43:54.932209Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.39.102:47132","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-15T17:43:54.942327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2212c0bfe49c9415 switched to configuration voters=(2455236677277094933 7369017258968441809)"}
	{"level":"info","ts":"2024-08-15T17:43:54.944488Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3ecd98d5111bce24","local-member-id":"2212c0bfe49c9415","removed-remote-peer-id":"46deb178e6549eb8","removed-remote-peer-urls":["https://192.168.39.102:2380"]}
	{"level":"info","ts":"2024-08-15T17:43:54.944599Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"46deb178e6549eb8"}
	{"level":"warn","ts":"2024-08-15T17:43:54.944849Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:54.944897Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"46deb178e6549eb8"}
	{"level":"warn","ts":"2024-08-15T17:43:54.945266Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:54.945295Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:54.945572Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"warn","ts":"2024-08-15T17:43:54.945862Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8","error":"context canceled"}
	{"level":"warn","ts":"2024-08-15T17:43:54.945912Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"46deb178e6549eb8","error":"failed to read 46deb178e6549eb8 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-15T17:43:54.945940Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"warn","ts":"2024-08-15T17:43:54.946232Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-08-15T17:43:54.946256Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"2212c0bfe49c9415","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:54.946276Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"46deb178e6549eb8"}
	{"level":"info","ts":"2024-08-15T17:43:54.946289Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"2212c0bfe49c9415","removed-remote-peer-id":"46deb178e6549eb8"}
	{"level":"warn","ts":"2024-08-15T17:43:54.956091Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"2212c0bfe49c9415","remote-peer-id-stream-handler":"2212c0bfe49c9415","remote-peer-id-from":"46deb178e6549eb8"}
	{"level":"warn","ts":"2024-08-15T17:43:54.959855Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"2212c0bfe49c9415","remote-peer-id-stream-handler":"2212c0bfe49c9415","remote-peer-id-from":"46deb178e6549eb8"}
	
	
	==> kernel <==
	 17:46:28 up 17 min,  0 users,  load average: 0.06, 0.24, 0.25
	Linux ha-683878 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [10eb11ca402df2d31c60c5ac05592da27f89eac7a3f05847f371cf5d53018bac] <==
	I0815 17:45:45.735762       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:45:55.736348       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:45:55.736391       1 main.go:299] handling current node
	I0815 17:45:55.736405       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:45:55.736412       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:45:55.736585       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:45:55.736611       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:46:05.733324       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:46:05.733578       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:46:05.733764       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:46:05.733794       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:46:05.733877       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:46:05.733897       1 main.go:299] handling current node
	I0815 17:46:15.728935       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:46:15.728993       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:46:15.729136       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:46:15.729159       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:46:15.729217       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:46:15.729240       1 main.go:299] handling current node
	I0815 17:46:25.738001       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:46:25.738362       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:46:25.738634       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:46:25.738667       1 main.go:299] handling current node
	I0815 17:46:25.738705       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:46:25.738721       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [78d6dea2ba1667b2d3ef1fa6d58a9cfceed152c787670ffec6a14515c2187480] <==
	I0815 17:38:56.704382       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:39:06.704641       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:39:06.704700       1 main.go:299] handling current node
	I0815 17:39:06.704714       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:39:06.704720       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:39:06.704834       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:39:06.704857       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:39:06.704945       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:39:06.704967       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:39:16.708192       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:39:16.708234       1 main.go:299] handling current node
	I0815 17:39:16.708254       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:39:16.708260       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:39:16.708435       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:39:16.708515       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:39:16.708602       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:39:16.708622       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	I0815 17:39:26.704518       1 main.go:295] Handling node with IPs: map[192.168.39.17:{}]
	I0815 17:39:26.704546       1 main.go:299] handling current node
	I0815 17:39:26.704560       1 main.go:295] Handling node with IPs: map[192.168.39.232:{}]
	I0815 17:39:26.704564       1 main.go:322] Node ha-683878-m02 has CIDR [10.244.1.0/24] 
	I0815 17:39:26.704697       1 main.go:295] Handling node with IPs: map[192.168.39.102:{}]
	I0815 17:39:26.704712       1 main.go:322] Node ha-683878-m03 has CIDR [10.244.2.0/24] 
	I0815 17:39:26.704791       1 main.go:295] Handling node with IPs: map[192.168.39.105:{}]
	I0815 17:39:26.704796       1 main.go:322] Node ha-683878-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [24c56aa67e4e2659a2eb6e8192b8b15c0490c238133ae3308e5fce281e058966] <==
	I0815 17:41:56.819643       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 17:41:56.820099       1 aggregator.go:171] initial CRD sync complete...
	I0815 17:41:56.820164       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 17:41:56.820206       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 17:41:56.870770       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 17:41:56.874116       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 17:41:56.878163       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 17:41:56.878224       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 17:41:56.883063       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 17:41:56.883378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 17:41:56.908792       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 17:41:56.908825       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 17:41:56.915850       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 17:41:56.916025       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 17:41:56.916056       1 policy_source.go:224] refreshing policies
	I0815 17:41:56.929650       1 cache.go:39] Caches are synced for autoregister controller
	W0815 17:41:56.943133       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.232]
	I0815 17:41:56.946101       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 17:41:56.962540       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0815 17:41:56.970100       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0815 17:41:56.971224       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 17:41:57.683852       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0815 17:41:58.090277       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.102 192.168.39.17 192.168.39.232]
	W0815 17:42:08.089204       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.232]
	W0815 17:44:08.101854       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.17 192.168.39.232]
	
	
	==> kube-apiserver [74a59586f84f7320ad534cf9b8b26ad133299a4dd8af0be1df493985e2d27f1c] <==
	I0815 17:41:15.010787       1 options.go:228] external host was not specified, using 192.168.39.17
	I0815 17:41:15.018644       1 server.go:142] Version: v1.31.0
	I0815 17:41:15.018744       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:41:15.990517       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0815 17:41:16.023623       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 17:41:16.032189       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 17:41:16.032264       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 17:41:16.032539       1 instance.go:232] Using reconciler: lease
	W0815 17:41:35.987893       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 17:41:35.987974       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0815 17:41:36.033607       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0815 17:41:36.033612       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [5d5c6f725a729b9cdbf1c96e63d9550f70855e20bbca143c47210bc88eea46e6] <==
	I0815 17:41:15.934850       1 serving.go:386] Generated self-signed cert in-memory
	I0815 17:41:16.922598       1 controllermanager.go:197] "Starting" version="v1.31.0"
	I0815 17:41:16.922638       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:41:16.924205       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0815 17:41:16.924603       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 17:41:16.924737       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 17:41:16.924865       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0815 17:41:37.039118       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.17:8443/healthz\": dial tcp 192.168.39.17:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cf2d808c645dae451dbe8682b457df0a3f073da398faffc19f22599def3aa8c8] <==
	I0815 17:43:53.093397       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="54.44µs"
	I0815 17:43:53.726289       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.081µs"
	I0815 17:43:54.578421       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.774µs"
	I0815 17:43:54.582790       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.478µs"
	I0815 17:43:57.052141       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="9.882357ms"
	I0815 17:43:57.053404       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="47.305µs"
	I0815 17:44:05.818835       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-683878-m04"
	I0815 17:44:05.818959       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m03"
	E0815 17:44:05.885571       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-683878-m03\", UID:\"970cec04-a175-4acd-b47d-d284de413eed\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-683878-m03\", UID:\"0b5b72ac-cec7-4764-ab15-4d9c38060d2d\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-683878-m03\" not found" logger="UnhandledError"
	E0815 17:44:20.080937       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:20.081031       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:20.081038       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:20.081044       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:20.081057       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:40.082092       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:40.082165       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:40.082173       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:40.082192       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	E0815 17:44:40.082198       1 gc_controller.go:151] "Failed to get node" err="node \"ha-683878-m03\" not found" logger="pod-garbage-collector-controller" node="ha-683878-m03"
	I0815 17:44:42.739748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:44:42.763999       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:44:42.824186       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="12.169432ms"
	I0815 17:44:42.824584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.625µs"
	I0815 17:44:45.338019       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	I0815 17:44:47.878152       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-683878-m04"
	
	
	==> kube-proxy [d18b204d856602d857da4e7fca7c22c800d964868e9cc8e3f627fd9fc6105f8e] <==
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:41:16.876685       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 17:41:19.945360       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 17:41:23.017716       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 17:41:29.160922       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0815 17:41:38.376920       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-683878\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0815 17:41:56.672726       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.17"]
	E0815 17:41:56.677597       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:41:57.031258       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:41:57.031311       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:41:57.031340       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:41:57.037841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:41:57.038148       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:41:57.038179       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:41:57.039896       1 config.go:197] "Starting service config controller"
	I0815 17:41:57.039954       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:41:57.039992       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:41:57.039997       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:41:57.040828       1 config.go:326] "Starting node config controller"
	I0815 17:41:57.040857       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:41:57.142031       1 shared_informer.go:320] Caches are synced for node config
	I0815 17:41:57.146594       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 17:41:57.146713       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [ea81ebf55447c4610364b6bbd8a20451f669d57f9a29be08da0d4a8a39bde018] <==
	E0815 17:38:21.769509       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:24.840773       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:24.840865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:24.840773       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:24.840907       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:27.915283       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:27.915371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:30.986926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:30.987522       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:30.987361       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:30.987810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:34.059296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:34.059406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:40.202841       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:40.202932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:43.273041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:43.273137       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:43.273265       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:43.273286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:38:58.633957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:38:58.634321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1948\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:39:04.778741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:39:04.779616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1918\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0815 17:39:10.923319       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972": dial tcp 192.168.39.254:8443: connect: no route to host
	E0815 17:39:10.923519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-683878&resourceVersion=1972\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [d9b5d872cbe2c529b6d05e6aea1a994166109f9df19645f725edfcdca7969a3f] <==
	I0815 17:32:48.191899       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lgsr4" node="ha-683878"
	E0815 17:33:26.612943       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dzspw\": pod kube-proxy-dzspw is already assigned to node \"ha-683878-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dzspw" node="ha-683878-m04"
	E0815 17:33:26.613188       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod eb8dfa16-0d1d-4ff8-8692-4268881e44c8(kube-system/kube-proxy-dzspw) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-dzspw"
	E0815 17:33:26.613271       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dzspw\": pod kube-proxy-dzspw is already assigned to node \"ha-683878-m04\"" pod="kube-system/kube-proxy-dzspw"
	I0815 17:33:26.613349       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-dzspw" node="ha-683878-m04"
	E0815 17:33:26.634591       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hmfn7\": pod kindnet-hmfn7 is already assigned to node \"ha-683878-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-hmfn7" node="ha-683878-m04"
	E0815 17:33:26.637167       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e58e4f5f-3ee5-4fa8-87c8-6caf24492efa(kube-system/kindnet-hmfn7) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-hmfn7"
	E0815 17:33:26.637925       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hmfn7\": pod kindnet-hmfn7 is already assigned to node \"ha-683878-m04\"" pod="kube-system/kindnet-hmfn7"
	I0815 17:33:26.638049       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hmfn7" node="ha-683878-m04"
	E0815 17:39:23.055696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0815 17:39:23.781930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0815 17:39:24.437850       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0815 17:39:25.753215       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0815 17:39:25.811419       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0815 17:39:25.811713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0815 17:39:26.295620       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)" logger="UnhandledError"
	E0815 17:39:28.331511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0815 17:39:28.432167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0815 17:39:30.019565       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)" logger="UnhandledError"
	E0815 17:39:31.188017       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0815 17:39:32.239768       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	I0815 17:39:32.931166       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0815 17:39:32.931721       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 17:39:32.932054       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0815 17:39:32.942197       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f78a9b7480fe83c2471c0c52fe754fdd2839373005031ff7aac548567ae98e20] <==
	W0815 17:41:47.373951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.17:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:47.374063       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.17:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:47.434932       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.17:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:47.435053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.17:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:51.865310       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.17:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:51.865371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.17:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:52.004795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.17:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:52.004976       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.17:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:52.126634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.17:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:52.126835       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.17:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:53.337654       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.17:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:53.337742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.39.17:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:53.717568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.39.17:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.17:8443: connect: connection refused
	E0815 17:41:53.717701       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get \"https://192.168.39.17:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.39.17:8443: connect: connection refused" logger="UnhandledError"
	W0815 17:41:56.732149       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:41:56.732204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:41:56.732346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:41:56.732378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:41:56.734836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:41:56.734886       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:41:56.734950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 17:41:56.734983       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:41:56.735037       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:41:56.735065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 17:41:58.949992       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 17:45:16 ha-683878 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:45:16 ha-683878 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:45:16 ha-683878 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:45:16 ha-683878 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:45:16 ha-683878 kubelet[1316]: E0815 17:45:16.754563    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743916754217282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:16 ha-683878 kubelet[1316]: E0815 17:45:16.754612    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743916754217282,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:26 ha-683878 kubelet[1316]: E0815 17:45:26.756080    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743926755860777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:26 ha-683878 kubelet[1316]: E0815 17:45:26.756107    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743926755860777,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:36 ha-683878 kubelet[1316]: E0815 17:45:36.765055    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743936764392603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:36 ha-683878 kubelet[1316]: E0815 17:45:36.765112    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743936764392603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:46 ha-683878 kubelet[1316]: E0815 17:45:46.767272    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743946766749447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:46 ha-683878 kubelet[1316]: E0815 17:45:46.767312    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743946766749447,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:56 ha-683878 kubelet[1316]: E0815 17:45:56.768267    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743956768025479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:45:56 ha-683878 kubelet[1316]: E0815 17:45:56.768291    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743956768025479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:46:06 ha-683878 kubelet[1316]: E0815 17:46:06.769600    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743966769330588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:46:06 ha-683878 kubelet[1316]: E0815 17:46:06.769642    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743966769330588,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:46:16 ha-683878 kubelet[1316]: E0815 17:46:16.494384    1316 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 17:46:16 ha-683878 kubelet[1316]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 17:46:16 ha-683878 kubelet[1316]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 17:46:16 ha-683878 kubelet[1316]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 17:46:16 ha-683878 kubelet[1316]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 17:46:16 ha-683878 kubelet[1316]: E0815 17:46:16.771276    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743976771050838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:46:16 ha-683878 kubelet[1316]: E0815 17:46:16.771327    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743976771050838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:46:26 ha-683878 kubelet[1316]: E0815 17:46:26.773281    1316 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743986772972110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 17:46:26 ha-683878 kubelet[1316]: E0815 17:46:26.773344    1316 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723743986772972110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:155608,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
** stderr ** 
	E0815 17:46:27.593358   41186 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19450-13013/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-683878 -n ha-683878
helpers_test.go:261: (dbg) Run:  kubectl --context ha-683878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.77s)

x
+
TestMultiNode/serial/RestartKeepsNodes (328.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-769827
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-769827
E0815 18:02:47.733947   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-769827: exit status 82 (2m1.838791968s)

-- stdout --
	* Stopping node "multinode-769827-m03"  ...
	* Stopping node "multinode-769827-m02"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-769827" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-769827 --wait=true -v=8 --alsologtostderr
E0815 18:04:52.218675   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:05:50.800048   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-769827 --wait=true -v=8 --alsologtostderr: (3m24.283862184s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-769827
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-769827 -n multinode-769827
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-769827 logs -n 25: (1.433836563s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m02:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3791465198/001/cp-test_multinode-769827-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m02:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827:/home/docker/cp-test_multinode-769827-m02_multinode-769827.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827 sudo cat                                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m02_multinode-769827.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m02:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03:/home/docker/cp-test_multinode-769827-m02_multinode-769827-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827-m03 sudo cat                                   | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m02_multinode-769827-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp testdata/cp-test.txt                                                | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3791465198/001/cp-test_multinode-769827-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827:/home/docker/cp-test_multinode-769827-m03_multinode-769827.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827 sudo cat                                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m03_multinode-769827.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02:/home/docker/cp-test_multinode-769827-m03_multinode-769827-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827-m02 sudo cat                                   | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m03_multinode-769827-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-769827 node stop m03                                                          | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	| node    | multinode-769827 node start                                                             | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-769827                                                                | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:02 UTC |                     |
	| stop    | -p multinode-769827                                                                     | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:02 UTC |                     |
	| start   | -p multinode-769827                                                                     | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:04 UTC | 15 Aug 24 18:07 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-769827                                                                | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:04:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:04:04.536275   50711 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:04:04.536548   50711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:04:04.536557   50711 out.go:358] Setting ErrFile to fd 2...
	I0815 18:04:04.536562   50711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:04:04.536734   50711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:04:04.537227   50711 out.go:352] Setting JSON to false
	I0815 18:04:04.538056   50711 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6390,"bootTime":1723738654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:04:04.538120   50711 start.go:139] virtualization: kvm guest
	I0815 18:04:04.540451   50711 out.go:177] * [multinode-769827] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:04:04.542097   50711 notify.go:220] Checking for updates...
	I0815 18:04:04.542137   50711 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:04:04.543622   50711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:04:04.545193   50711 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:04:04.546202   50711 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:04:04.547394   50711 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:04:04.548586   50711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:04:04.550282   50711 config.go:182] Loaded profile config "multinode-769827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:04:04.550369   50711 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:04:04.550900   50711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:04:04.550972   50711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:04:04.566990   50711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0815 18:04:04.567422   50711 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:04:04.567949   50711 main.go:141] libmachine: Using API Version  1
	I0815 18:04:04.567968   50711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:04:04.568380   50711 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:04:04.568663   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:04:04.605070   50711 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:04:04.606399   50711 start.go:297] selected driver: kvm2
	I0815 18:04:04.606423   50711 start.go:901] validating driver "kvm2" against &{Name:multinode-769827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.143 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:04:04.606563   50711 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:04:04.606901   50711 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:04:04.606963   50711 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:04:04.621754   50711 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:04:04.622389   50711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:04:04.622449   50711 cni.go:84] Creating CNI manager for ""
	I0815 18:04:04.622460   50711 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 18:04:04.622517   50711 start.go:340] cluster config:
	{Name:multinode-769827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-769827 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.143 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:04:04.622630   50711 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:04:04.625122   50711 out.go:177] * Starting "multinode-769827" primary control-plane node in "multinode-769827" cluster
	I0815 18:04:04.626386   50711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:04:04.626422   50711 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:04:04.626437   50711 cache.go:56] Caching tarball of preloaded images
	I0815 18:04:04.626506   50711 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:04:04.626516   50711 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 18:04:04.626620   50711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/config.json ...
	I0815 18:04:04.626795   50711 start.go:360] acquireMachinesLock for multinode-769827: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:04:04.626832   50711 start.go:364] duration metric: took 21.682µs to acquireMachinesLock for "multinode-769827"
	I0815 18:04:04.626855   50711 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:04:04.626862   50711 fix.go:54] fixHost starting: 
	I0815 18:04:04.627123   50711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:04:04.627153   50711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:04:04.641317   50711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45169
	I0815 18:04:04.641779   50711 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:04:04.642281   50711 main.go:141] libmachine: Using API Version  1
	I0815 18:04:04.642302   50711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:04:04.642683   50711 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:04:04.642849   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:04:04.642997   50711 main.go:141] libmachine: (multinode-769827) Calling .GetState
	I0815 18:04:04.644573   50711 fix.go:112] recreateIfNeeded on multinode-769827: state=Running err=<nil>
	W0815 18:04:04.644600   50711 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:04:04.646559   50711 out.go:177] * Updating the running kvm2 "multinode-769827" VM ...
	I0815 18:04:04.647738   50711 machine.go:93] provisionDockerMachine start ...
	I0815 18:04:04.647762   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:04:04.647960   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:04.650584   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.651000   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:04.651039   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.651164   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:04.651338   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.651513   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.651656   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:04.651820   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:04:04.652033   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:04:04.652048   50711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:04:04.770393   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-769827
	
	I0815 18:04:04.770417   50711 main.go:141] libmachine: (multinode-769827) Calling .GetMachineName
	I0815 18:04:04.770717   50711 buildroot.go:166] provisioning hostname "multinode-769827"
	I0815 18:04:04.770741   50711 main.go:141] libmachine: (multinode-769827) Calling .GetMachineName
	I0815 18:04:04.770916   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:04.773577   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.773957   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:04.773992   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.774068   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:04.774258   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.774397   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.774542   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:04.774730   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:04:04.774902   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:04:04.774915   50711 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-769827 && echo "multinode-769827" | sudo tee /etc/hostname
	I0815 18:04:04.912302   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-769827
	
	I0815 18:04:04.912334   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:04.914903   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.915217   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:04.915260   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.915416   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:04.915608   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.915767   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.916003   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:04.916173   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:04:04.916332   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:04:04.916348   50711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-769827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-769827/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-769827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:04:05.034355   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:04:05.034405   50711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:04:05.034436   50711 buildroot.go:174] setting up certificates
	I0815 18:04:05.034452   50711 provision.go:84] configureAuth start
	I0815 18:04:05.034469   50711 main.go:141] libmachine: (multinode-769827) Calling .GetMachineName
	I0815 18:04:05.034705   50711 main.go:141] libmachine: (multinode-769827) Calling .GetIP
	I0815 18:04:05.037455   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.037818   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:05.037844   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.037962   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:05.040038   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.040524   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:05.040550   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.040674   50711 provision.go:143] copyHostCerts
	I0815 18:04:05.040703   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:04:05.040742   50711 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:04:05.040755   50711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:04:05.040823   50711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:04:05.040906   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:04:05.040930   50711 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:04:05.040939   50711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:04:05.040973   50711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:04:05.041033   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:04:05.041053   50711 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:04:05.041062   50711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:04:05.041107   50711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:04:05.041179   50711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.multinode-769827 san=[127.0.0.1 192.168.39.73 localhost minikube multinode-769827]
	I0815 18:04:05.113018   50711 provision.go:177] copyRemoteCerts
	I0815 18:04:05.113081   50711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:04:05.113102   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:05.115719   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.116028   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:05.116056   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.116183   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:05.116356   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:05.116529   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:05.116644   50711 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:04:05.208573   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 18:04:05.208635   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:04:05.233618   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 18:04:05.233684   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0815 18:04:05.258637   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 18:04:05.258716   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:04:05.282418   50711 provision.go:87] duration metric: took 247.953382ms to configureAuth
	I0815 18:04:05.282441   50711 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:04:05.282662   50711 config.go:182] Loaded profile config "multinode-769827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:04:05.282734   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:05.285144   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.285534   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:05.285564   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.285700   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:05.285894   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:05.286044   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:05.286166   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:05.286305   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:04:05.286472   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:04:05.286488   50711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:05:36.039243   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:05:36.039306   50711 machine.go:96] duration metric: took 1m31.391524206s to provisionDockerMachine
	I0815 18:05:36.039319   50711 start.go:293] postStartSetup for "multinode-769827" (driver="kvm2")
	I0815 18:05:36.039330   50711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:05:36.039351   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.039714   50711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:05:36.039747   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:05:36.042693   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.043156   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.043186   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.043347   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:05:36.043513   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.043653   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:05:36.043762   50711 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:05:36.132030   50711 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:05:36.136472   50711 command_runner.go:130] > NAME=Buildroot
	I0815 18:05:36.136508   50711 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0815 18:05:36.136515   50711 command_runner.go:130] > ID=buildroot
	I0815 18:05:36.136526   50711 command_runner.go:130] > VERSION_ID=2023.02.9
	I0815 18:05:36.136534   50711 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0815 18:05:36.136575   50711 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:05:36.136593   50711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:05:36.136662   50711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:05:36.136734   50711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:05:36.136743   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 18:05:36.136823   50711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:05:36.146456   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:05:36.171777   50711 start.go:296] duration metric: took 132.445038ms for postStartSetup
	I0815 18:05:36.171822   50711 fix.go:56] duration metric: took 1m31.54495919s for fixHost
	I0815 18:05:36.171846   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:05:36.174685   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.175156   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.175208   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.175364   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:05:36.175564   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.175702   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.175819   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:05:36.175961   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:05:36.176142   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:05:36.176156   50711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:05:36.289590   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723745136.269521186
	
	I0815 18:05:36.289621   50711 fix.go:216] guest clock: 1723745136.269521186
	I0815 18:05:36.289633   50711 fix.go:229] Guest: 2024-08-15 18:05:36.269521186 +0000 UTC Remote: 2024-08-15 18:05:36.171828223 +0000 UTC m=+91.669935516 (delta=97.692963ms)
	I0815 18:05:36.289662   50711 fix.go:200] guest clock delta is within tolerance: 97.692963ms
	I0815 18:05:36.289688   50711 start.go:83] releasing machines lock for "multinode-769827", held for 1m31.662827859s
	I0815 18:05:36.289719   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.289990   50711 main.go:141] libmachine: (multinode-769827) Calling .GetIP
	I0815 18:05:36.292957   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.293289   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.293319   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.293610   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.294114   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.294275   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.294384   50711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:05:36.294417   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:05:36.294504   50711 ssh_runner.go:195] Run: cat /version.json
	I0815 18:05:36.294522   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:05:36.296878   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.297149   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.297220   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.297240   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.297406   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:05:36.297557   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.297588   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.297651   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.297704   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:05:36.297790   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:05:36.297871   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.297954   50711 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:05:36.297980   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:05:36.298089   50711 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:05:36.377837   50711 command_runner.go:130] > {"iso_version": "v1.33.1-1723650137-19443", "kicbase_version": "v0.0.44-1723567951-19429", "minikube_version": "v1.33.1", "commit": "0de88034feeac7cdc6e3fa82af59b9e46ac52b3e"}
	I0815 18:05:36.378030   50711 ssh_runner.go:195] Run: systemctl --version
	I0815 18:05:36.402962   50711 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0815 18:05:36.403014   50711 command_runner.go:130] > systemd 252 (252)
	I0815 18:05:36.403038   50711 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0815 18:05:36.403109   50711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:05:36.566434   50711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 18:05:36.573211   50711 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0815 18:05:36.573669   50711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:05:36.573747   50711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:05:36.583143   50711 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 18:05:36.583164   50711 start.go:495] detecting cgroup driver to use...
	I0815 18:05:36.583233   50711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:05:36.600529   50711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:05:36.615338   50711 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:05:36.615428   50711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:05:36.629543   50711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:05:36.642960   50711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:05:36.788221   50711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:05:36.931352   50711 docker.go:233] disabling docker service ...
	I0815 18:05:36.931425   50711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:05:36.947024   50711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:05:36.960693   50711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:05:37.100696   50711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:05:37.254567   50711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:05:37.269424   50711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:05:37.288346   50711 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
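	(The /etc/crictl.yaml written above is what lets the later plain crictl calls in this log find the CRI-O socket without an explicit endpoint. A sketch of how to verify it by hand from inside the guest; passing --runtime-endpoint explicitly is equivalent to relying on the config file:
	    cat /etc/crictl.yaml
	    sudo crictl version
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	)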
	I0815 18:05:37.288654   50711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:05:37.288704   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.299633   50711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:05:37.299698   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.310602   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.321176   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.332819   50711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:05:37.344207   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.355147   50711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.366091   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.376990   50711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:05:37.387168   50711 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0815 18:05:37.387261   50711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:05:37.396602   50711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:05:37.533930   50711 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:05:42.582037   50711 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.048069221s)
	I0815 18:05:42.582077   50711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:05:42.582140   50711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:05:42.587179   50711 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0815 18:05:42.587205   50711 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0815 18:05:42.587219   50711 command_runner.go:130] > Device: 0,22	Inode: 1323        Links: 1
	I0815 18:05:42.587228   50711 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 18:05:42.587235   50711 command_runner.go:130] > Access: 2024-08-15 18:05:42.457805645 +0000
	I0815 18:05:42.587244   50711 command_runner.go:130] > Modify: 2024-08-15 18:05:42.457805645 +0000
	I0815 18:05:42.587254   50711 command_runner.go:130] > Change: 2024-08-15 18:05:42.457805645 +0000
	I0815 18:05:42.587260   50711 command_runner.go:130] >  Birth: -
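	(With CRI-O restarted and its socket back, the sed edits made above, pause image, cgroupfs cgroup manager, conmon_cgroup and the unprivileged-port sysctl, can be spot-checked by reading the drop-in back or asking crio for its effective configuration; these commands are a verification sketch, not part of the test run:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    sudo crio config | grep -E 'pause_image|cgroup_manager'
	)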
	I0815 18:05:42.587280   50711 start.go:563] Will wait 60s for crictl version
	I0815 18:05:42.587321   50711 ssh_runner.go:195] Run: which crictl
	I0815 18:05:42.591344   50711 command_runner.go:130] > /usr/bin/crictl
	I0815 18:05:42.591427   50711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:05:42.630320   50711 command_runner.go:130] > Version:  0.1.0
	I0815 18:05:42.630460   50711 command_runner.go:130] > RuntimeName:  cri-o
	I0815 18:05:42.630479   50711 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0815 18:05:42.630573   50711 command_runner.go:130] > RuntimeApiVersion:  v1
	I0815 18:05:42.631777   50711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:05:42.631840   50711 ssh_runner.go:195] Run: crio --version
	I0815 18:05:42.658659   50711 command_runner.go:130] > crio version 1.29.1
	I0815 18:05:42.658681   50711 command_runner.go:130] > Version:        1.29.1
	I0815 18:05:42.658687   50711 command_runner.go:130] > GitCommit:      unknown
	I0815 18:05:42.658691   50711 command_runner.go:130] > GitCommitDate:  unknown
	I0815 18:05:42.658695   50711 command_runner.go:130] > GitTreeState:   clean
	I0815 18:05:42.658700   50711 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0815 18:05:42.658705   50711 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 18:05:42.658708   50711 command_runner.go:130] > Compiler:       gc
	I0815 18:05:42.658713   50711 command_runner.go:130] > Platform:       linux/amd64
	I0815 18:05:42.658716   50711 command_runner.go:130] > Linkmode:       dynamic
	I0815 18:05:42.658720   50711 command_runner.go:130] > BuildTags:      
	I0815 18:05:42.658724   50711 command_runner.go:130] >   containers_image_ostree_stub
	I0815 18:05:42.658729   50711 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 18:05:42.658733   50711 command_runner.go:130] >   btrfs_noversion
	I0815 18:05:42.658738   50711 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 18:05:42.658744   50711 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 18:05:42.658748   50711 command_runner.go:130] >   seccomp
	I0815 18:05:42.658771   50711 command_runner.go:130] > LDFlags:          unknown
	I0815 18:05:42.658779   50711 command_runner.go:130] > SeccompEnabled:   true
	I0815 18:05:42.658784   50711 command_runner.go:130] > AppArmorEnabled:  false
	I0815 18:05:42.659967   50711 ssh_runner.go:195] Run: crio --version
	I0815 18:05:42.688024   50711 command_runner.go:130] > crio version 1.29.1
	I0815 18:05:42.688054   50711 command_runner.go:130] > Version:        1.29.1
	I0815 18:05:42.688062   50711 command_runner.go:130] > GitCommit:      unknown
	I0815 18:05:42.688068   50711 command_runner.go:130] > GitCommitDate:  unknown
	I0815 18:05:42.688073   50711 command_runner.go:130] > GitTreeState:   clean
	I0815 18:05:42.688080   50711 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0815 18:05:42.688086   50711 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 18:05:42.688092   50711 command_runner.go:130] > Compiler:       gc
	I0815 18:05:42.688098   50711 command_runner.go:130] > Platform:       linux/amd64
	I0815 18:05:42.688104   50711 command_runner.go:130] > Linkmode:       dynamic
	I0815 18:05:42.688110   50711 command_runner.go:130] > BuildTags:      
	I0815 18:05:42.688116   50711 command_runner.go:130] >   containers_image_ostree_stub
	I0815 18:05:42.688123   50711 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 18:05:42.688129   50711 command_runner.go:130] >   btrfs_noversion
	I0815 18:05:42.688135   50711 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 18:05:42.688141   50711 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 18:05:42.688154   50711 command_runner.go:130] >   seccomp
	I0815 18:05:42.688165   50711 command_runner.go:130] > LDFlags:          unknown
	I0815 18:05:42.688172   50711 command_runner.go:130] > SeccompEnabled:   true
	I0815 18:05:42.688180   50711 command_runner.go:130] > AppArmorEnabled:  false
	I0815 18:05:42.690163   50711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:05:42.691354   50711 main.go:141] libmachine: (multinode-769827) Calling .GetIP
	I0815 18:05:42.694137   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:42.694485   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:42.694509   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:42.694718   50711 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:05:42.699044   50711 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0815 18:05:42.699141   50711 kubeadm.go:883] updating cluster {Name:multinode-769827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.143 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:05:42.699264   50711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:05:42.699303   50711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:05:42.744665   50711 command_runner.go:130] > {
	I0815 18:05:42.744690   50711 command_runner.go:130] >   "images": [
	I0815 18:05:42.744695   50711 command_runner.go:130] >     {
	I0815 18:05:42.744703   50711 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 18:05:42.744708   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.744713   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 18:05:42.744717   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744721   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.744729   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 18:05:42.744735   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 18:05:42.744747   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744752   50711 command_runner.go:130] >       "size": "87165492",
	I0815 18:05:42.744757   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.744760   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.744766   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.744775   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.744779   50711 command_runner.go:130] >     },
	I0815 18:05:42.744785   50711 command_runner.go:130] >     {
	I0815 18:05:42.744791   50711 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 18:05:42.744795   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.744800   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 18:05:42.744804   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744808   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.744815   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 18:05:42.744825   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 18:05:42.744831   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744835   50711 command_runner.go:130] >       "size": "87190579",
	I0815 18:05:42.744841   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.744850   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.744857   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.744861   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.744864   50711 command_runner.go:130] >     },
	I0815 18:05:42.744868   50711 command_runner.go:130] >     {
	I0815 18:05:42.744874   50711 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 18:05:42.744878   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.744883   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 18:05:42.744887   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744891   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.744898   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 18:05:42.744907   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 18:05:42.744911   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744917   50711 command_runner.go:130] >       "size": "1363676",
	I0815 18:05:42.744921   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.744925   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.744930   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.744934   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.744941   50711 command_runner.go:130] >     },
	I0815 18:05:42.744947   50711 command_runner.go:130] >     {
	I0815 18:05:42.744953   50711 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 18:05:42.744957   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.744962   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 18:05:42.744966   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744970   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.744977   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 18:05:42.744992   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 18:05:42.744998   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745002   50711 command_runner.go:130] >       "size": "31470524",
	I0815 18:05:42.745008   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.745013   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745019   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745023   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745029   50711 command_runner.go:130] >     },
	I0815 18:05:42.745033   50711 command_runner.go:130] >     {
	I0815 18:05:42.745041   50711 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 18:05:42.745046   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745050   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 18:05:42.745056   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745059   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745072   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 18:05:42.745081   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 18:05:42.745087   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745092   50711 command_runner.go:130] >       "size": "61245718",
	I0815 18:05:42.745115   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.745119   50711 command_runner.go:130] >       "username": "nonroot",
	I0815 18:05:42.745123   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745127   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745131   50711 command_runner.go:130] >     },
	I0815 18:05:42.745136   50711 command_runner.go:130] >     {
	I0815 18:05:42.745142   50711 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 18:05:42.745148   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745153   50711 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 18:05:42.745158   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745167   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745175   50711 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 18:05:42.745184   50711 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 18:05:42.745192   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745198   50711 command_runner.go:130] >       "size": "149009664",
	I0815 18:05:42.745202   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745207   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.745211   50711 command_runner.go:130] >       },
	I0815 18:05:42.745217   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745221   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745227   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745230   50711 command_runner.go:130] >     },
	I0815 18:05:42.745236   50711 command_runner.go:130] >     {
	I0815 18:05:42.745242   50711 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 18:05:42.745248   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745252   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 18:05:42.745258   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745262   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745271   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 18:05:42.745281   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 18:05:42.745287   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745292   50711 command_runner.go:130] >       "size": "95233506",
	I0815 18:05:42.745297   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745302   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.745307   50711 command_runner.go:130] >       },
	I0815 18:05:42.745310   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745316   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745320   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745326   50711 command_runner.go:130] >     },
	I0815 18:05:42.745329   50711 command_runner.go:130] >     {
	I0815 18:05:42.745337   50711 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 18:05:42.745343   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745350   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 18:05:42.745356   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745359   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745380   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 18:05:42.745395   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 18:05:42.745401   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745410   50711 command_runner.go:130] >       "size": "89437512",
	I0815 18:05:42.745416   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745420   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.745426   50711 command_runner.go:130] >       },
	I0815 18:05:42.745429   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745433   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745436   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745440   50711 command_runner.go:130] >     },
	I0815 18:05:42.745443   50711 command_runner.go:130] >     {
	I0815 18:05:42.745448   50711 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 18:05:42.745452   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745456   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 18:05:42.745460   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745464   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745471   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 18:05:42.745477   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 18:05:42.745481   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745485   50711 command_runner.go:130] >       "size": "92728217",
	I0815 18:05:42.745488   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.745491   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745495   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745498   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745501   50711 command_runner.go:130] >     },
	I0815 18:05:42.745505   50711 command_runner.go:130] >     {
	I0815 18:05:42.745510   50711 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 18:05:42.745514   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745518   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 18:05:42.745521   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745525   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745534   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 18:05:42.745543   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 18:05:42.745549   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745553   50711 command_runner.go:130] >       "size": "68420936",
	I0815 18:05:42.745559   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745567   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.745574   50711 command_runner.go:130] >       },
	I0815 18:05:42.745578   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745584   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745588   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745594   50711 command_runner.go:130] >     },
	I0815 18:05:42.745597   50711 command_runner.go:130] >     {
	I0815 18:05:42.745604   50711 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 18:05:42.745609   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745614   50711 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 18:05:42.745620   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745624   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745635   50711 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 18:05:42.745643   50711 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 18:05:42.745647   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745651   50711 command_runner.go:130] >       "size": "742080",
	I0815 18:05:42.745654   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745661   50711 command_runner.go:130] >         "value": "65535"
	I0815 18:05:42.745664   50711 command_runner.go:130] >       },
	I0815 18:05:42.745668   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745672   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745675   50711 command_runner.go:130] >       "pinned": true
	I0815 18:05:42.745678   50711 command_runner.go:130] >     }
	I0815 18:05:42.745681   50711 command_runner.go:130] >   ]
	I0815 18:05:42.745686   50711 command_runner.go:130] > }
	I0815 18:05:42.745957   50711 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:05:42.745972   50711 crio.go:433] Images already preloaded, skipping extraction
	I0815 18:05:42.746014   50711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:05:42.778673   50711 command_runner.go:130] > {
	I0815 18:05:42.778695   50711 command_runner.go:130] >   "images": [
	I0815 18:05:42.778699   50711 command_runner.go:130] >     {
	I0815 18:05:42.778707   50711 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 18:05:42.778712   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.778718   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 18:05:42.778721   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778725   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.778733   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 18:05:42.778740   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 18:05:42.778744   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778749   50711 command_runner.go:130] >       "size": "87165492",
	I0815 18:05:42.778755   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.778759   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.778771   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.778778   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.778781   50711 command_runner.go:130] >     },
	I0815 18:05:42.778784   50711 command_runner.go:130] >     {
	I0815 18:05:42.778790   50711 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 18:05:42.778796   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.778802   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 18:05:42.778808   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778812   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.778821   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 18:05:42.778830   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 18:05:42.778836   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778845   50711 command_runner.go:130] >       "size": "87190579",
	I0815 18:05:42.778851   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.778860   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.778867   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.778871   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.778877   50711 command_runner.go:130] >     },
	I0815 18:05:42.778888   50711 command_runner.go:130] >     {
	I0815 18:05:42.778896   50711 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 18:05:42.778901   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.778907   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 18:05:42.778912   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778916   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.778925   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 18:05:42.778934   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 18:05:42.778937   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778942   50711 command_runner.go:130] >       "size": "1363676",
	I0815 18:05:42.778946   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.778954   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.778958   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.778965   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.778968   50711 command_runner.go:130] >     },
	I0815 18:05:42.778974   50711 command_runner.go:130] >     {
	I0815 18:05:42.778980   50711 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 18:05:42.778987   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.778992   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 18:05:42.778998   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779002   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779011   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 18:05:42.779027   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 18:05:42.779033   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779038   50711 command_runner.go:130] >       "size": "31470524",
	I0815 18:05:42.779044   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.779047   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779053   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779057   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779063   50711 command_runner.go:130] >     },
	I0815 18:05:42.779067   50711 command_runner.go:130] >     {
	I0815 18:05:42.779076   50711 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 18:05:42.779086   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779096   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 18:05:42.779102   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779107   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779120   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 18:05:42.779129   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 18:05:42.779135   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779140   50711 command_runner.go:130] >       "size": "61245718",
	I0815 18:05:42.779145   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.779150   50711 command_runner.go:130] >       "username": "nonroot",
	I0815 18:05:42.779156   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779160   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779165   50711 command_runner.go:130] >     },
	I0815 18:05:42.779169   50711 command_runner.go:130] >     {
	I0815 18:05:42.779177   50711 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 18:05:42.779183   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779188   50711 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 18:05:42.779193   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779198   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779207   50711 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 18:05:42.779215   50711 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 18:05:42.779220   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779224   50711 command_runner.go:130] >       "size": "149009664",
	I0815 18:05:42.779230   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779235   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.779240   50711 command_runner.go:130] >       },
	I0815 18:05:42.779244   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779250   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779254   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779260   50711 command_runner.go:130] >     },
	I0815 18:05:42.779263   50711 command_runner.go:130] >     {
	I0815 18:05:42.779271   50711 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 18:05:42.779277   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779282   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 18:05:42.779288   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779292   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779301   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 18:05:42.779310   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 18:05:42.779316   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779321   50711 command_runner.go:130] >       "size": "95233506",
	I0815 18:05:42.779336   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779343   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.779346   50711 command_runner.go:130] >       },
	I0815 18:05:42.779350   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779354   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779358   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779361   50711 command_runner.go:130] >     },
	I0815 18:05:42.779365   50711 command_runner.go:130] >     {
	I0815 18:05:42.779372   50711 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 18:05:42.779376   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779383   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 18:05:42.779387   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779393   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779414   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 18:05:42.779424   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 18:05:42.779428   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779431   50711 command_runner.go:130] >       "size": "89437512",
	I0815 18:05:42.779435   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779442   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.779445   50711 command_runner.go:130] >       },
	I0815 18:05:42.779450   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779453   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779460   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779464   50711 command_runner.go:130] >     },
	I0815 18:05:42.779469   50711 command_runner.go:130] >     {
	I0815 18:05:42.779474   50711 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 18:05:42.779480   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779485   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 18:05:42.779491   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779497   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779506   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 18:05:42.779515   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 18:05:42.779520   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779524   50711 command_runner.go:130] >       "size": "92728217",
	I0815 18:05:42.779531   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.779535   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779545   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779551   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779555   50711 command_runner.go:130] >     },
	I0815 18:05:42.779567   50711 command_runner.go:130] >     {
	I0815 18:05:42.779575   50711 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 18:05:42.779579   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779587   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 18:05:42.779592   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779597   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779606   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 18:05:42.779615   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 18:05:42.779620   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779627   50711 command_runner.go:130] >       "size": "68420936",
	I0815 18:05:42.779631   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779637   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.779641   50711 command_runner.go:130] >       },
	I0815 18:05:42.779647   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779651   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779657   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779660   50711 command_runner.go:130] >     },
	I0815 18:05:42.779666   50711 command_runner.go:130] >     {
	I0815 18:05:42.779672   50711 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 18:05:42.779678   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779683   50711 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 18:05:42.779688   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779693   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779701   50711 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 18:05:42.779710   50711 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 18:05:42.779714   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779720   50711 command_runner.go:130] >       "size": "742080",
	I0815 18:05:42.779724   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779728   50711 command_runner.go:130] >         "value": "65535"
	I0815 18:05:42.779732   50711 command_runner.go:130] >       },
	I0815 18:05:42.779735   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779741   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779745   50711 command_runner.go:130] >       "pinned": true
	I0815 18:05:42.779752   50711 command_runner.go:130] >     }
	I0815 18:05:42.779758   50711 command_runner.go:130] >   ]
	I0815 18:05:42.779760   50711 command_runner.go:130] > }
	I0815 18:05:42.780137   50711 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:05:42.780157   50711 cache_images.go:84] Images are preloaded, skipping loading
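	(The preload check above only lists what the runtime already has and compares it against the expected Kubernetes v1.31.0 image set, which is why no extraction or pull happens here. The same cache can be inspected manually in a friendlier form than the raw JSON; these commands are a manual equivalent, not taken from the run:
	    sudo crictl images
	    sudo crictl images --output json | grep -c '"id"'    # rough count of cached images
	)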
	I0815 18:05:42.780165   50711 kubeadm.go:934] updating node { 192.168.39.73 8443 v1.31.0 crio true true} ...
	I0815 18:05:42.780261   50711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-769827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
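	(The [Unit]/[Service]/[Install] fragment above is written into the guest as a systemd drop-in for the kubelet, with --node-ip and --hostname-override pinned to this node. Once in place it can be inspected with systemctl; the exact drop-in path below is an assumption and may differ by minikube/kubeadm layout, while systemctl cat works regardless:
	    systemctl cat kubelet
	    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # assumed path, not from the log
	)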
	I0815 18:05:42.780346   50711 ssh_runner.go:195] Run: crio config
	I0815 18:05:42.821002   50711 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0815 18:05:42.821039   50711 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0815 18:05:42.821050   50711 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0815 18:05:42.821056   50711 command_runner.go:130] > #
	I0815 18:05:42.821090   50711 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0815 18:05:42.821103   50711 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0815 18:05:42.821115   50711 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0815 18:05:42.821125   50711 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0815 18:05:42.821129   50711 command_runner.go:130] > # reload'.
	I0815 18:05:42.821135   50711 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0815 18:05:42.821141   50711 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0815 18:05:42.821148   50711 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0815 18:05:42.821154   50711 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0815 18:05:42.821159   50711 command_runner.go:130] > [crio]
	I0815 18:05:42.821171   50711 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0815 18:05:42.821179   50711 command_runner.go:130] > # containers images, in this directory.
	I0815 18:05:42.821188   50711 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0815 18:05:42.821201   50711 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0815 18:05:42.821225   50711 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0815 18:05:42.821240   50711 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0815 18:05:42.821443   50711 command_runner.go:130] > # imagestore = ""
	I0815 18:05:42.821460   50711 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0815 18:05:42.821470   50711 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0815 18:05:42.821788   50711 command_runner.go:130] > storage_driver = "overlay"
	I0815 18:05:42.821804   50711 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0815 18:05:42.821810   50711 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0815 18:05:42.821814   50711 command_runner.go:130] > storage_option = [
	I0815 18:05:42.822724   50711 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0815 18:05:42.822737   50711 command_runner.go:130] > ]
	I0815 18:05:42.822747   50711 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0815 18:05:42.822757   50711 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0815 18:05:42.822764   50711 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0815 18:05:42.822773   50711 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0815 18:05:42.822786   50711 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0815 18:05:42.822797   50711 command_runner.go:130] > # always happen on a node reboot
	I0815 18:05:42.822804   50711 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0815 18:05:42.822826   50711 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0815 18:05:42.822840   50711 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0815 18:05:42.822848   50711 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0815 18:05:42.822855   50711 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0815 18:05:42.822871   50711 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0815 18:05:42.822885   50711 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0815 18:05:42.822895   50711 command_runner.go:130] > # internal_wipe = true
	I0815 18:05:42.822907   50711 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0815 18:05:42.822919   50711 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0815 18:05:42.822929   50711 command_runner.go:130] > # internal_repair = false
	I0815 18:05:42.822936   50711 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0815 18:05:42.822943   50711 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0815 18:05:42.822949   50711 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0815 18:05:42.822958   50711 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0815 18:05:42.822967   50711 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0815 18:05:42.822985   50711 command_runner.go:130] > [crio.api]
	I0815 18:05:42.822997   50711 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0815 18:05:42.823008   50711 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0815 18:05:42.823022   50711 command_runner.go:130] > # IP address on which the stream server will listen.
	I0815 18:05:42.823032   50711 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0815 18:05:42.823055   50711 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0815 18:05:42.823069   50711 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0815 18:05:42.823075   50711 command_runner.go:130] > # stream_port = "0"
	I0815 18:05:42.823086   50711 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0815 18:05:42.823096   50711 command_runner.go:130] > # stream_enable_tls = false
	I0815 18:05:42.823103   50711 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0815 18:05:42.823109   50711 command_runner.go:130] > # stream_idle_timeout = ""
	I0815 18:05:42.823115   50711 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0815 18:05:42.823124   50711 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0815 18:05:42.823128   50711 command_runner.go:130] > # minutes.
	I0815 18:05:42.823134   50711 command_runner.go:130] > # stream_tls_cert = ""
	I0815 18:05:42.823140   50711 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0815 18:05:42.823151   50711 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0815 18:05:42.823159   50711 command_runner.go:130] > # stream_tls_key = ""
	I0815 18:05:42.823174   50711 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0815 18:05:42.823187   50711 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0815 18:05:42.823211   50711 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0815 18:05:42.823220   50711 command_runner.go:130] > # stream_tls_ca = ""
	I0815 18:05:42.823231   50711 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 18:05:42.823241   50711 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0815 18:05:42.823252   50711 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 18:05:42.823261   50711 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
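For reference, the two gRPC limits set above come out to 16 MiB rather than CRI-O's 80 MiB default; a quick arithmetic check (not part of the dumped config):

	# 16 * 1024 * 1024 = 16777216 bytes (16 MiB); the default would be 80 * 1024 * 1024 = 83886080 bytes (80 MiB)
	grpc_max_send_msg_size = 16777216
	grpc_max_recv_msg_size = 16777216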
	I0815 18:05:42.823269   50711 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0815 18:05:42.823280   50711 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0815 18:05:42.823287   50711 command_runner.go:130] > [crio.runtime]
	I0815 18:05:42.823296   50711 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0815 18:05:42.823307   50711 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0815 18:05:42.823316   50711 command_runner.go:130] > # "nofile=1024:2048"
	I0815 18:05:42.823324   50711 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0815 18:05:42.823333   50711 command_runner.go:130] > # default_ulimits = [
	I0815 18:05:42.823338   50711 command_runner.go:130] > # ]
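As an illustration of the "<ulimit name>=<soft limit>:<hard limit>" syntax described above (the values are hypothetical; this node leaves default_ulimits unset):

	default_ulimits = [
		"nofile=1024:2048",
		"nproc=4096:8192",
	]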
	I0815 18:05:42.823362   50711 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0815 18:05:42.823372   50711 command_runner.go:130] > # no_pivot = false
	I0815 18:05:42.823380   50711 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0815 18:05:42.823391   50711 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0815 18:05:42.823400   50711 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0815 18:05:42.823408   50711 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0815 18:05:42.823418   50711 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0815 18:05:42.823428   50711 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 18:05:42.823438   50711 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0815 18:05:42.823445   50711 command_runner.go:130] > # Cgroup setting for conmon
	I0815 18:05:42.823458   50711 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0815 18:05:42.823468   50711 command_runner.go:130] > conmon_cgroup = "pod"
	I0815 18:05:42.823477   50711 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0815 18:05:42.823487   50711 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0815 18:05:42.823497   50711 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 18:05:42.823506   50711 command_runner.go:130] > conmon_env = [
	I0815 18:05:42.823514   50711 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 18:05:42.823523   50711 command_runner.go:130] > ]
	I0815 18:05:42.823533   50711 command_runner.go:130] > # Additional environment variables to set for all the
	I0815 18:05:42.823544   50711 command_runner.go:130] > # containers. These are overridden if set in the
	I0815 18:05:42.823556   50711 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0815 18:05:42.823565   50711 command_runner.go:130] > # default_env = [
	I0815 18:05:42.823571   50711 command_runner.go:130] > # ]
	I0815 18:05:42.823583   50711 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0815 18:05:42.823596   50711 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0815 18:05:42.823605   50711 command_runner.go:130] > # selinux = false
	I0815 18:05:42.823615   50711 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0815 18:05:42.823628   50711 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0815 18:05:42.823640   50711 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0815 18:05:42.823650   50711 command_runner.go:130] > # seccomp_profile = ""
	I0815 18:05:42.823659   50711 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0815 18:05:42.823670   50711 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0815 18:05:42.823681   50711 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0815 18:05:42.823691   50711 command_runner.go:130] > # which might increase security.
	I0815 18:05:42.823702   50711 command_runner.go:130] > # This option is currently deprecated,
	I0815 18:05:42.823711   50711 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0815 18:05:42.823734   50711 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0815 18:05:42.823750   50711 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0815 18:05:42.823762   50711 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0815 18:05:42.823775   50711 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0815 18:05:42.823788   50711 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0815 18:05:42.823798   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.823805   50711 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0815 18:05:42.823816   50711 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0815 18:05:42.823823   50711 command_runner.go:130] > # the cgroup blockio controller.
	I0815 18:05:42.823832   50711 command_runner.go:130] > # blockio_config_file = ""
	I0815 18:05:42.823843   50711 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0815 18:05:42.823852   50711 command_runner.go:130] > # blockio parameters.
	I0815 18:05:42.823858   50711 command_runner.go:130] > # blockio_reload = false
	I0815 18:05:42.823872   50711 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0815 18:05:42.823880   50711 command_runner.go:130] > # irqbalance daemon.
	I0815 18:05:42.823888   50711 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0815 18:05:42.823901   50711 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask CRI-O should
	I0815 18:05:42.823914   50711 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0815 18:05:42.823928   50711 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0815 18:05:42.823940   50711 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0815 18:05:42.823952   50711 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0815 18:05:42.823963   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.823969   50711 command_runner.go:130] > # rdt_config_file = ""
	I0815 18:05:42.823981   50711 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0815 18:05:42.823987   50711 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0815 18:05:42.824030   50711 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0815 18:05:42.824042   50711 command_runner.go:130] > # separate_pull_cgroup = ""
	I0815 18:05:42.824052   50711 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0815 18:05:42.824065   50711 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0815 18:05:42.824073   50711 command_runner.go:130] > # will be added.
	I0815 18:05:42.824079   50711 command_runner.go:130] > # default_capabilities = [
	I0815 18:05:42.824088   50711 command_runner.go:130] > # 	"CHOWN",
	I0815 18:05:42.824099   50711 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0815 18:05:42.824108   50711 command_runner.go:130] > # 	"FSETID",
	I0815 18:05:42.824113   50711 command_runner.go:130] > # 	"FOWNER",
	I0815 18:05:42.824122   50711 command_runner.go:130] > # 	"SETGID",
	I0815 18:05:42.824136   50711 command_runner.go:130] > # 	"SETUID",
	I0815 18:05:42.824144   50711 command_runner.go:130] > # 	"SETPCAP",
	I0815 18:05:42.824152   50711 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0815 18:05:42.824156   50711 command_runner.go:130] > # 	"KILL",
	I0815 18:05:42.824159   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824169   50711 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0815 18:05:42.824177   50711 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0815 18:05:42.824182   50711 command_runner.go:130] > # add_inheritable_capabilities = false
	I0815 18:05:42.824189   50711 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0815 18:05:42.824196   50711 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 18:05:42.824201   50711 command_runner.go:130] > default_sysctls = [
	I0815 18:05:42.824205   50711 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0815 18:05:42.824210   50711 command_runner.go:130] > ]
	I0815 18:05:42.824215   50711 command_runner.go:130] > # List of devices on the host that a
	I0815 18:05:42.824223   50711 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0815 18:05:42.824228   50711 command_runner.go:130] > # allowed_devices = [
	I0815 18:05:42.824231   50711 command_runner.go:130] > # 	"/dev/fuse",
	I0815 18:05:42.824237   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824241   50711 command_runner.go:130] > # List of additional devices, specified as
	I0815 18:05:42.824250   50711 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0815 18:05:42.824257   50711 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0815 18:05:42.824263   50711 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 18:05:42.824268   50711 command_runner.go:130] > # additional_devices = [
	I0815 18:05:42.824272   50711 command_runner.go:130] > # ]
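A minimal sketch of the "<device-on-host>:<device-on-container>:<permissions>" syntax, reusing the example device from the comment above (this node leaves additional_devices empty):

	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",
	]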
	I0815 18:05:42.824280   50711 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0815 18:05:42.824285   50711 command_runner.go:130] > # cdi_spec_dirs = [
	I0815 18:05:42.824291   50711 command_runner.go:130] > # 	"/etc/cdi",
	I0815 18:05:42.824296   50711 command_runner.go:130] > # 	"/var/run/cdi",
	I0815 18:05:42.824301   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824308   50711 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0815 18:05:42.824315   50711 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0815 18:05:42.824322   50711 command_runner.go:130] > # Defaults to false.
	I0815 18:05:42.824326   50711 command_runner.go:130] > # device_ownership_from_security_context = false
	I0815 18:05:42.824332   50711 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0815 18:05:42.824339   50711 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0815 18:05:42.824343   50711 command_runner.go:130] > # hooks_dir = [
	I0815 18:05:42.824365   50711 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0815 18:05:42.824369   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824374   50711 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0815 18:05:42.824382   50711 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0815 18:05:42.824389   50711 command_runner.go:130] > # its default mounts from the following two files:
	I0815 18:05:42.824393   50711 command_runner.go:130] > #
	I0815 18:05:42.824399   50711 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0815 18:05:42.824407   50711 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0815 18:05:42.824412   50711 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0815 18:05:42.824418   50711 command_runner.go:130] > #
	I0815 18:05:42.824423   50711 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0815 18:05:42.824432   50711 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0815 18:05:42.824438   50711 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0815 18:05:42.824445   50711 command_runner.go:130] > #      only add mounts it finds in this file.
	I0815 18:05:42.824448   50711 command_runner.go:130] > #
	I0815 18:05:42.824452   50711 command_runner.go:130] > # default_mounts_file = ""
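The mounts file referenced above uses one /SRC:/DST pair per line; a hypothetical /etc/containers/mounts.conf entry (the paths are placeholders, not from this node):

	/usr/share/secrets:/run/secrets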
	I0815 18:05:42.824459   50711 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0815 18:05:42.824468   50711 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0815 18:05:42.824472   50711 command_runner.go:130] > pids_limit = 1024
	I0815 18:05:42.824478   50711 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0815 18:05:42.824501   50711 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0815 18:05:42.824515   50711 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0815 18:05:42.824528   50711 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0815 18:05:42.824535   50711 command_runner.go:130] > # log_size_max = -1
	I0815 18:05:42.824541   50711 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0815 18:05:42.824547   50711 command_runner.go:130] > # log_to_journald = false
	I0815 18:05:42.824553   50711 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0815 18:05:42.824560   50711 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0815 18:05:42.824565   50711 command_runner.go:130] > # Path to directory for container attach sockets.
	I0815 18:05:42.824572   50711 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0815 18:05:42.824577   50711 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0815 18:05:42.824583   50711 command_runner.go:130] > # bind_mount_prefix = ""
	I0815 18:05:42.824588   50711 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0815 18:05:42.824594   50711 command_runner.go:130] > # read_only = false
	I0815 18:05:42.824600   50711 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0815 18:05:42.824608   50711 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0815 18:05:42.824619   50711 command_runner.go:130] > # live configuration reload.
	I0815 18:05:42.824626   50711 command_runner.go:130] > # log_level = "info"
	I0815 18:05:42.824631   50711 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0815 18:05:42.824645   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.824651   50711 command_runner.go:130] > # log_filter = ""
	I0815 18:05:42.824657   50711 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0815 18:05:42.824667   50711 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0815 18:05:42.824673   50711 command_runner.go:130] > # separated by comma.
	I0815 18:05:42.824680   50711 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 18:05:42.824687   50711 command_runner.go:130] > # uid_mappings = ""
	I0815 18:05:42.824692   50711 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0815 18:05:42.824700   50711 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0815 18:05:42.824705   50711 command_runner.go:130] > # separated by comma.
	I0815 18:05:42.824712   50711 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 18:05:42.824718   50711 command_runner.go:130] > # gid_mappings = ""
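Both mapping options use the containerID:HostID:Size form, with multiple ranges separated by commas; a hedged sketch with placeholder values (this node leaves both unset):

	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"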
	I0815 18:05:42.824724   50711 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0815 18:05:42.824735   50711 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 18:05:42.824743   50711 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 18:05:42.824750   50711 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 18:05:42.824756   50711 command_runner.go:130] > # minimum_mappable_uid = -1
	I0815 18:05:42.824762   50711 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0815 18:05:42.824770   50711 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 18:05:42.824778   50711 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 18:05:42.824786   50711 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 18:05:42.824792   50711 command_runner.go:130] > # minimum_mappable_gid = -1
	I0815 18:05:42.824798   50711 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0815 18:05:42.824806   50711 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0815 18:05:42.824817   50711 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0815 18:05:42.824823   50711 command_runner.go:130] > # ctr_stop_timeout = 30
	I0815 18:05:42.824828   50711 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0815 18:05:42.824835   50711 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0815 18:05:42.824840   50711 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0815 18:05:42.824847   50711 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0815 18:05:42.824851   50711 command_runner.go:130] > drop_infra_ctr = false
	I0815 18:05:42.824857   50711 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0815 18:05:42.824864   50711 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0815 18:05:42.824876   50711 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0815 18:05:42.824882   50711 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0815 18:05:42.824888   50711 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0815 18:05:42.824896   50711 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0815 18:05:42.824902   50711 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0815 18:05:42.824909   50711 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0815 18:05:42.824913   50711 command_runner.go:130] > # shared_cpuset = ""
	I0815 18:05:42.824919   50711 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0815 18:05:42.824925   50711 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0815 18:05:42.824929   50711 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0815 18:05:42.824938   50711 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0815 18:05:42.824944   50711 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0815 18:05:42.824949   50711 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0815 18:05:42.824957   50711 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0815 18:05:42.824964   50711 command_runner.go:130] > # enable_criu_support = false
	I0815 18:05:42.824969   50711 command_runner.go:130] > # Enable/disable the generation of the container,
	I0815 18:05:42.824976   50711 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0815 18:05:42.824981   50711 command_runner.go:130] > # enable_pod_events = false
	I0815 18:05:42.824987   50711 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0815 18:05:42.825000   50711 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0815 18:05:42.825006   50711 command_runner.go:130] > # default_runtime = "runc"
	I0815 18:05:42.825011   50711 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0815 18:05:42.825020   50711 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0815 18:05:42.825034   50711 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0815 18:05:42.825041   50711 command_runner.go:130] > # creation as a file is not desired either.
	I0815 18:05:42.825048   50711 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0815 18:05:42.825056   50711 command_runner.go:130] > # the hostname is being managed dynamically.
	I0815 18:05:42.825063   50711 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0815 18:05:42.825066   50711 command_runner.go:130] > # ]
	I0815 18:05:42.825074   50711 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0815 18:05:42.825081   50711 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0815 18:05:42.825089   50711 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0815 18:05:42.825094   50711 command_runner.go:130] > # Each entry in the table should follow the format:
	I0815 18:05:42.825100   50711 command_runner.go:130] > #
	I0815 18:05:42.825105   50711 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0815 18:05:42.825115   50711 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0815 18:05:42.825170   50711 command_runner.go:130] > # runtime_type = "oci"
	I0815 18:05:42.825183   50711 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0815 18:05:42.825190   50711 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0815 18:05:42.825195   50711 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0815 18:05:42.825205   50711 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0815 18:05:42.825212   50711 command_runner.go:130] > # monitor_env = []
	I0815 18:05:42.825216   50711 command_runner.go:130] > # privileged_without_host_devices = false
	I0815 18:05:42.825223   50711 command_runner.go:130] > # allowed_annotations = []
	I0815 18:05:42.825228   50711 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0815 18:05:42.825234   50711 command_runner.go:130] > # Where:
	I0815 18:05:42.825239   50711 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0815 18:05:42.825247   50711 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0815 18:05:42.825255   50711 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0815 18:05:42.825263   50711 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0815 18:05:42.825269   50711 command_runner.go:130] > #   in $PATH.
	I0815 18:05:42.825275   50711 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0815 18:05:42.825280   50711 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0815 18:05:42.825286   50711 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0815 18:05:42.825292   50711 command_runner.go:130] > #   state.
	I0815 18:05:42.825298   50711 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0815 18:05:42.825306   50711 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0815 18:05:42.825313   50711 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0815 18:05:42.825320   50711 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0815 18:05:42.825326   50711 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0815 18:05:42.825335   50711 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0815 18:05:42.825341   50711 command_runner.go:130] > #   The currently recognized values are:
	I0815 18:05:42.825354   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0815 18:05:42.825363   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0815 18:05:42.825369   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0815 18:05:42.825376   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0815 18:05:42.825385   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0815 18:05:42.825394   50711 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0815 18:05:42.825402   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0815 18:05:42.825409   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0815 18:05:42.825416   50711 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0815 18:05:42.825432   50711 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0815 18:05:42.825438   50711 command_runner.go:130] > #   deprecated option "conmon".
	I0815 18:05:42.825445   50711 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0815 18:05:42.825452   50711 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0815 18:05:42.825458   50711 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0815 18:05:42.825465   50711 command_runner.go:130] > #   should be moved to the container's cgroup
	I0815 18:05:42.825471   50711 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0815 18:05:42.825478   50711 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0815 18:05:42.825485   50711 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0815 18:05:42.825492   50711 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0815 18:05:42.825498   50711 command_runner.go:130] > #
	I0815 18:05:42.825502   50711 command_runner.go:130] > # Using the seccomp notifier feature:
	I0815 18:05:42.825508   50711 command_runner.go:130] > #
	I0815 18:05:42.825514   50711 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0815 18:05:42.825522   50711 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0815 18:05:42.825528   50711 command_runner.go:130] > #
	I0815 18:05:42.825533   50711 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0815 18:05:42.825541   50711 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0815 18:05:42.825544   50711 command_runner.go:130] > #
	I0815 18:05:42.825550   50711 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0815 18:05:42.825555   50711 command_runner.go:130] > # feature.
	I0815 18:05:42.825558   50711 command_runner.go:130] > #
	I0815 18:05:42.825567   50711 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0815 18:05:42.825621   50711 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0815 18:05:42.825642   50711 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0815 18:05:42.825655   50711 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0815 18:05:42.825667   50711 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0815 18:05:42.825675   50711 command_runner.go:130] > #
	I0815 18:05:42.825688   50711 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0815 18:05:42.825699   50711 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0815 18:05:42.825707   50711 command_runner.go:130] > #
	I0815 18:05:42.825718   50711 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0815 18:05:42.825730   50711 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0815 18:05:42.825737   50711 command_runner.go:130] > #
	I0815 18:05:42.825745   50711 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0815 18:05:42.825756   50711 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0815 18:05:42.825779   50711 command_runner.go:130] > # limitation.
	I0815 18:05:42.825792   50711 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0815 18:05:42.825802   50711 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0815 18:05:42.825811   50711 command_runner.go:130] > runtime_type = "oci"
	I0815 18:05:42.825818   50711 command_runner.go:130] > runtime_root = "/run/runc"
	I0815 18:05:42.825827   50711 command_runner.go:130] > runtime_config_path = ""
	I0815 18:05:42.825835   50711 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0815 18:05:42.825843   50711 command_runner.go:130] > monitor_cgroup = "pod"
	I0815 18:05:42.825849   50711 command_runner.go:130] > monitor_exec_cgroup = ""
	I0815 18:05:42.825857   50711 command_runner.go:130] > monitor_env = [
	I0815 18:05:42.825866   50711 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 18:05:42.825871   50711 command_runner.go:130] > ]
	I0815 18:05:42.825876   50711 command_runner.go:130] > privileged_without_host_devices = false
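For comparison with the runc handler above, a sketch of what an additional [crio.runtime.runtimes.<handler>] entry could look like under the documented format; the crun paths and the allowed_annotations value (enabling the seccomp notifier described earlier) are assumptions for illustration, not settings from this node:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]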
	I0815 18:05:42.825885   50711 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0815 18:05:42.825892   50711 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0815 18:05:42.825899   50711 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0815 18:05:42.825909   50711 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0815 18:05:42.825917   50711 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0815 18:05:42.825925   50711 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0815 18:05:42.825941   50711 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0815 18:05:42.825956   50711 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0815 18:05:42.825968   50711 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0815 18:05:42.825981   50711 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0815 18:05:42.825987   50711 command_runner.go:130] > # Example:
	I0815 18:05:42.825995   50711 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0815 18:05:42.826004   50711 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0815 18:05:42.826011   50711 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0815 18:05:42.826017   50711 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0815 18:05:42.826020   50711 command_runner.go:130] > # cpuset = 0
	I0815 18:05:42.826024   50711 command_runner.go:130] > # cpushares = "0-1"
	I0815 18:05:42.826027   50711 command_runner.go:130] > # Where:
	I0815 18:05:42.826031   50711 command_runner.go:130] > # The workload name is workload-type.
	I0815 18:05:42.826038   50711 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0815 18:05:42.826043   50711 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0815 18:05:42.826049   50711 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0815 18:05:42.826056   50711 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0815 18:05:42.826069   50711 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
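Putting the workload documentation above together, a hypothetical filled-in table (the workload name and resource values are illustrative only):

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.throttled.resources]
	cpushares = "512"
	cpuset = "0-1"

A pod opts in by carrying the "io.crio/workload" annotation; a per-container override would then use an annotation of the form io.crio.workload-type.cpuset/<container> per the prefix rule described above.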
	I0815 18:05:42.826074   50711 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0815 18:05:42.826084   50711 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0815 18:05:42.826088   50711 command_runner.go:130] > # Default value is set to true
	I0815 18:05:42.826092   50711 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0815 18:05:42.826098   50711 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0815 18:05:42.826102   50711 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0815 18:05:42.826107   50711 command_runner.go:130] > # Default value is set to 'false'
	I0815 18:05:42.826111   50711 command_runner.go:130] > # disable_hostport_mapping = false
	I0815 18:05:42.826116   50711 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0815 18:05:42.826120   50711 command_runner.go:130] > #
	I0815 18:05:42.826129   50711 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0815 18:05:42.826137   50711 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0815 18:05:42.826143   50711 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0815 18:05:42.826149   50711 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0815 18:05:42.826155   50711 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0815 18:05:42.826159   50711 command_runner.go:130] > [crio.image]
	I0815 18:05:42.826169   50711 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0815 18:05:42.826176   50711 command_runner.go:130] > # default_transport = "docker://"
	I0815 18:05:42.826182   50711 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0815 18:05:42.826190   50711 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0815 18:05:42.826194   50711 command_runner.go:130] > # global_auth_file = ""
	I0815 18:05:42.826199   50711 command_runner.go:130] > # The image used to instantiate infra containers.
	I0815 18:05:42.826207   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.826212   50711 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0815 18:05:42.826221   50711 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0815 18:05:42.826227   50711 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0815 18:05:42.826234   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.826238   50711 command_runner.go:130] > # pause_image_auth_file = ""
	I0815 18:05:42.826246   50711 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0815 18:05:42.826253   50711 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0815 18:05:42.826261   50711 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0815 18:05:42.826266   50711 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0815 18:05:42.826273   50711 command_runner.go:130] > # pause_command = "/pause"
	I0815 18:05:42.826279   50711 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0815 18:05:42.826287   50711 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0815 18:05:42.826297   50711 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0815 18:05:42.826308   50711 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0815 18:05:42.826313   50711 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0815 18:05:42.826320   50711 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0815 18:05:42.826324   50711 command_runner.go:130] > # pinned_images = [
	I0815 18:05:42.826328   50711 command_runner.go:130] > # ]
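A hedged sketch of the exact, glob, and keyword patterns described above, using the pause image this profile configures plus placeholder names:

	pinned_images = [
		"registry.k8s.io/pause:3.10",   # exact match
		"quay.io/crio/*",               # glob: trailing wildcard
		"*coredns*",                    # keyword: wildcards on both ends
	]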
	I0815 18:05:42.826334   50711 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0815 18:05:42.826342   50711 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0815 18:05:42.826348   50711 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0815 18:05:42.826356   50711 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0815 18:05:42.826361   50711 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0815 18:05:42.826366   50711 command_runner.go:130] > # signature_policy = ""
	I0815 18:05:42.826371   50711 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0815 18:05:42.826383   50711 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0815 18:05:42.826391   50711 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0815 18:05:42.826396   50711 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0815 18:05:42.826404   50711 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0815 18:05:42.826416   50711 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0815 18:05:42.826424   50711 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0815 18:05:42.826430   50711 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0815 18:05:42.826436   50711 command_runner.go:130] > # changing them here.
	I0815 18:05:42.826440   50711 command_runner.go:130] > # insecure_registries = [
	I0815 18:05:42.826443   50711 command_runner.go:130] > # ]
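Were the in-file override preferred over registries.conf, the list takes plain registry host entries; the registry below is a placeholder:

	insecure_registries = [
		"registry.local:5000",
	]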
	I0815 18:05:42.826450   50711 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0815 18:05:42.826458   50711 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0815 18:05:42.826464   50711 command_runner.go:130] > # image_volumes = "mkdir"
	I0815 18:05:42.826475   50711 command_runner.go:130] > # Temporary directory to use for storing big files
	I0815 18:05:42.826482   50711 command_runner.go:130] > # big_files_temporary_dir = ""
	I0815 18:05:42.826491   50711 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0815 18:05:42.826496   50711 command_runner.go:130] > # CNI plugins.
	I0815 18:05:42.826502   50711 command_runner.go:130] > [crio.network]
	I0815 18:05:42.826508   50711 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0815 18:05:42.826514   50711 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0815 18:05:42.826518   50711 command_runner.go:130] > # cni_default_network = ""
	I0815 18:05:42.826523   50711 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0815 18:05:42.826530   50711 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0815 18:05:42.826540   50711 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0815 18:05:42.826546   50711 command_runner.go:130] > # plugin_dirs = [
	I0815 18:05:42.826550   50711 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0815 18:05:42.826560   50711 command_runner.go:130] > # ]
	I0815 18:05:42.826568   50711 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0815 18:05:42.826572   50711 command_runner.go:130] > [crio.metrics]
	I0815 18:05:42.826578   50711 command_runner.go:130] > # Globally enable or disable metrics support.
	I0815 18:05:42.826582   50711 command_runner.go:130] > enable_metrics = true
	I0815 18:05:42.826588   50711 command_runner.go:130] > # Specify enabled metrics collectors.
	I0815 18:05:42.826593   50711 command_runner.go:130] > # Per default all metrics are enabled.
	I0815 18:05:42.826599   50711 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0815 18:05:42.826607   50711 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0815 18:05:42.826615   50711 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0815 18:05:42.826619   50711 command_runner.go:130] > # metrics_collectors = [
	I0815 18:05:42.826624   50711 command_runner.go:130] > # 	"operations",
	I0815 18:05:42.826628   50711 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0815 18:05:42.826635   50711 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0815 18:05:42.826639   50711 command_runner.go:130] > # 	"operations_errors",
	I0815 18:05:42.826645   50711 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0815 18:05:42.826649   50711 command_runner.go:130] > # 	"image_pulls_by_name",
	I0815 18:05:42.826656   50711 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0815 18:05:42.826660   50711 command_runner.go:130] > # 	"image_pulls_failures",
	I0815 18:05:42.826664   50711 command_runner.go:130] > # 	"image_pulls_successes",
	I0815 18:05:42.826668   50711 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0815 18:05:42.826672   50711 command_runner.go:130] > # 	"image_layer_reuse",
	I0815 18:05:42.826676   50711 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0815 18:05:42.826680   50711 command_runner.go:130] > # 	"containers_oom_total",
	I0815 18:05:42.826684   50711 command_runner.go:130] > # 	"containers_oom",
	I0815 18:05:42.826688   50711 command_runner.go:130] > # 	"processes_defunct",
	I0815 18:05:42.826692   50711 command_runner.go:130] > # 	"operations_total",
	I0815 18:05:42.826696   50711 command_runner.go:130] > # 	"operations_latency_seconds",
	I0815 18:05:42.826700   50711 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0815 18:05:42.826706   50711 command_runner.go:130] > # 	"operations_errors_total",
	I0815 18:05:42.826710   50711 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0815 18:05:42.826721   50711 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0815 18:05:42.826727   50711 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0815 18:05:42.826740   50711 command_runner.go:130] > # 	"image_pulls_success_total",
	I0815 18:05:42.826747   50711 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0815 18:05:42.826751   50711 command_runner.go:130] > # 	"containers_oom_count_total",
	I0815 18:05:42.826758   50711 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0815 18:05:42.826762   50711 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0815 18:05:42.826766   50711 command_runner.go:130] > # ]
	I0815 18:05:42.826771   50711 command_runner.go:130] > # The port on which the metrics server will listen.
	I0815 18:05:42.826777   50711 command_runner.go:130] > # metrics_port = 9090
	I0815 18:05:42.826783   50711 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0815 18:05:42.826787   50711 command_runner.go:130] > # metrics_socket = ""
	I0815 18:05:42.826792   50711 command_runner.go:130] > # The certificate for the secure metrics server.
	I0815 18:05:42.826800   50711 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0815 18:05:42.826806   50711 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0815 18:05:42.826813   50711 command_runner.go:130] > # certificate on any modification event.
	I0815 18:05:42.826816   50711 command_runner.go:130] > # metrics_cert = ""
	I0815 18:05:42.826821   50711 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0815 18:05:42.826827   50711 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0815 18:05:42.826831   50711 command_runner.go:130] > # metrics_key = ""
	I0815 18:05:42.826837   50711 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0815 18:05:42.826842   50711 command_runner.go:130] > [crio.tracing]
	I0815 18:05:42.826847   50711 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0815 18:05:42.826852   50711 command_runner.go:130] > # enable_tracing = false
	I0815 18:05:42.826857   50711 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0815 18:05:42.826862   50711 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0815 18:05:42.826869   50711 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0815 18:05:42.826875   50711 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0815 18:05:42.826879   50711 command_runner.go:130] > # CRI-O NRI configuration.
	I0815 18:05:42.826884   50711 command_runner.go:130] > [crio.nri]
	I0815 18:05:42.826888   50711 command_runner.go:130] > # Globally enable or disable NRI.
	I0815 18:05:42.826892   50711 command_runner.go:130] > # enable_nri = false
	I0815 18:05:42.826896   50711 command_runner.go:130] > # NRI socket to listen on.
	I0815 18:05:42.826901   50711 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0815 18:05:42.826905   50711 command_runner.go:130] > # NRI plugin directory to use.
	I0815 18:05:42.826912   50711 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0815 18:05:42.826916   50711 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0815 18:05:42.826923   50711 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0815 18:05:42.826933   50711 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0815 18:05:42.826939   50711 command_runner.go:130] > # nri_disable_connections = false
	I0815 18:05:42.826944   50711 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0815 18:05:42.826950   50711 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0815 18:05:42.826955   50711 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0815 18:05:42.826966   50711 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0815 18:05:42.826973   50711 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0815 18:05:42.826977   50711 command_runner.go:130] > [crio.stats]
	I0815 18:05:42.826984   50711 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0815 18:05:42.826989   50711 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0815 18:05:42.826995   50711 command_runner.go:130] > # stats_collection_period = 0
	I0815 18:05:42.827026   50711 command_runner.go:130] ! time="2024-08-15 18:05:42.792146362Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0815 18:05:42.827039   50711 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0815 18:05:42.827210   50711 cni.go:84] Creating CNI manager for ""
	I0815 18:05:42.827225   50711 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 18:05:42.827236   50711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:05:42.827257   50711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-769827 NodeName:multinode-769827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:05:42.827390   50711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-769827"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
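	The kubeadm config printed above is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) separated by ---, and the next steps copy it to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch, standard library only, that splits such a file and reports the kind of each document (the path is taken from the log below; everything else is illustrative):

	package main

	import (
	    "fmt"
	    "os"
	    "strings"
	)

	func main() {
	    // Path copied from the log; adjust for your environment.
	    data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	    if err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }
	    // kubeadm accepts several API objects in one file, separated by "---".
	    for i, doc := range strings.Split(string(data), "\n---") {
	        kind := "unknown"
	        for _, line := range strings.Split(doc, "\n") {
	            trimmed := strings.TrimSpace(line)
	            if strings.HasPrefix(trimmed, "kind:") {
	                kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
	                break
	            }
	        }
	        fmt.Printf("document %d: kind=%s\n", i+1, kind)
	    }
	}

	Because kubeadm accepts exactly this kind of multi-document file, the kubelet and kube-proxy settings can travel alongside the init configuration in one upload.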
	I0815 18:05:42.827460   50711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:05:42.837909   50711 command_runner.go:130] > kubeadm
	I0815 18:05:42.837933   50711 command_runner.go:130] > kubectl
	I0815 18:05:42.837940   50711 command_runner.go:130] > kubelet
	I0815 18:05:42.838002   50711 binaries.go:44] Found k8s binaries, skipping transfer
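	The three expected binaries are already present under /var/lib/minikube/binaries/v1.31.0, so the transfer is skipped. A minimal sketch of that presence check (the directory is the one from the log; the rest is illustrative, not minikube's code):

	package main

	import (
	    "fmt"
	    "os"
	    "path/filepath"
	)

	func main() {
	    dir := "/var/lib/minikube/binaries/v1.31.0" // from the log above
	    missing := false
	    for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
	        if _, err := os.Stat(filepath.Join(dir, name)); err != nil {
	            fmt.Println("missing:", name)
	            missing = true
	        }
	    }
	    if !missing {
	        fmt.Println("Found k8s binaries, skipping transfer")
	    }
	}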
	I0815 18:05:42.838055   50711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:05:42.847779   50711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0815 18:05:42.864141   50711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:05:42.880904   50711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0815 18:05:42.897589   50711 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0815 18:05:42.901265   50711 command_runner.go:130] > 192.168.39.73	control-plane.minikube.internal
	I0815 18:05:42.901371   50711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:05:43.037297   50711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:05:43.051889   50711 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827 for IP: 192.168.39.73
	I0815 18:05:43.051914   50711 certs.go:194] generating shared ca certs ...
	I0815 18:05:43.051929   50711 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:05:43.052087   50711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:05:43.052131   50711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:05:43.052142   50711 certs.go:256] generating profile certs ...
	I0815 18:05:43.052217   50711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/client.key
	I0815 18:05:43.052273   50711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.key.f6f8ed09
	I0815 18:05:43.052309   50711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.key
	I0815 18:05:43.052320   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 18:05:43.052334   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 18:05:43.052359   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 18:05:43.052372   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 18:05:43.052383   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 18:05:43.052397   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 18:05:43.052409   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 18:05:43.052418   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 18:05:43.052465   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:05:43.052522   50711 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:05:43.052534   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:05:43.052556   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:05:43.052580   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:05:43.052603   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:05:43.052651   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:05:43.052683   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.052696   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.052708   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.053263   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:05:43.078370   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:05:43.101934   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:05:43.125482   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:05:43.149530   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:05:43.173909   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:05:43.198716   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:05:43.222387   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:05:43.246363   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:05:43.271418   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:05:43.295460   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:05:43.318095   50711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:05:43.334196   50711 ssh_runner.go:195] Run: openssl version
	I0815 18:05:43.339708   50711 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0815 18:05:43.339932   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:05:43.350623   50711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.354876   50711 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.354987   50711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.355036   50711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.360400   50711 command_runner.go:130] > b5213941
	I0815 18:05:43.360459   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:05:43.369911   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:05:43.381189   50711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.385543   50711 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.385568   50711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.385606   50711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.391173   50711 command_runner.go:130] > 51391683
	I0815 18:05:43.391237   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:05:43.400738   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:05:43.411924   50711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.416455   50711 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.416507   50711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.416556   50711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.422079   50711 command_runner.go:130] > 3ec20f2e
	I0815 18:05:43.422245   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:05:43.431802   50711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:05:43.436161   50711 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:05:43.436185   50711 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0815 18:05:43.436191   50711 command_runner.go:130] > Device: 253,1	Inode: 1056278     Links: 1
	I0815 18:05:43.436197   50711 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 18:05:43.436204   50711 command_runner.go:130] > Access: 2024-08-15 17:58:53.814641207 +0000
	I0815 18:05:43.436209   50711 command_runner.go:130] > Modify: 2024-08-15 17:58:53.814641207 +0000
	I0815 18:05:43.436214   50711 command_runner.go:130] > Change: 2024-08-15 17:58:53.814641207 +0000
	I0815 18:05:43.436219   50711 command_runner.go:130] >  Birth: 2024-08-15 17:58:53.814641207 +0000
	I0815 18:05:43.436263   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:05:43.441545   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.441855   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:05:43.447044   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.447088   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:05:43.453109   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.453214   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:05:43.458506   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.458737   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:05:43.464216   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.464264   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:05:43.469715   50711 command_runner.go:130] > Certificate will not expire
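	Each certificate above is checked with openssl x509 -noout -in <cert> -checkend 86400, i.e. "will this certificate still be valid 24 hours from now?". A rough Go equivalent using crypto/x509 (the path is one of the certificates from the log; this is a sketch, not minikube's code):

	package main

	import (
	    "crypto/x509"
	    "encoding/pem"
	    "fmt"
	    "os"
	    "time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring what `openssl x509 -checkend <seconds>` checks.
	func expiresWithin(path string, d time.Duration) (bool, error) {
	    data, err := os.ReadFile(path)
	    if err != nil {
	        return false, err
	    }
	    block, _ := pem.Decode(data)
	    if block == nil {
	        return false, fmt.Errorf("no PEM data in %s", path)
	    }
	    cert, err := x509.ParseCertificate(block.Bytes)
	    if err != nil {
	        return false, err
	    }
	    return cert.NotAfter.Before(time.Now().Add(d)), nil
	}

	func main() {
	    // Example path from the log; any PEM-encoded certificate works.
	    soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	    if err != nil {
	        fmt.Fprintln(os.Stderr, err)
	        os.Exit(1)
	    }
	    if soon {
	        fmt.Println("Certificate will expire")
	    } else {
	        fmt.Println("Certificate will not expire")
	    }
	}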
	I0815 18:05:43.469830   50711 kubeadm.go:392] StartCluster: {Name:multinode-769827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.143 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:05:43.469935   50711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:05:43.469980   50711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:05:43.505575   50711 command_runner.go:130] > 65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f
	I0815 18:05:43.505606   50711 command_runner.go:130] > 6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0
	I0815 18:05:43.505617   50711 command_runner.go:130] > 29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77
	I0815 18:05:43.505626   50711 command_runner.go:130] > fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee
	I0815 18:05:43.505632   50711 command_runner.go:130] > 99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0
	I0815 18:05:43.505637   50711 command_runner.go:130] > 006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469
	I0815 18:05:43.505643   50711 command_runner.go:130] > 75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed
	I0815 18:05:43.505725   50711 command_runner.go:130] > 77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a
	I0815 18:05:43.507200   50711 cri.go:89] found id: "65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f"
	I0815 18:05:43.507216   50711 cri.go:89] found id: "6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0"
	I0815 18:05:43.507220   50711 cri.go:89] found id: "29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77"
	I0815 18:05:43.507223   50711 cri.go:89] found id: "fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee"
	I0815 18:05:43.507225   50711 cri.go:89] found id: "99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0"
	I0815 18:05:43.507228   50711 cri.go:89] found id: "006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469"
	I0815 18:05:43.507231   50711 cri.go:89] found id: "75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed"
	I0815 18:05:43.507233   50711 cri.go:89] found id: "77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a"
	I0815 18:05:43.507236   50711 cri.go:89] found id: ""
	I0815 18:05:43.507278   50711 ssh_runner.go:195] Run: sudo runc list -f json
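	StartCluster first collects the IDs of any existing kube-system containers via crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system, which is where the "found id:" lines above come from. A small Go sketch of the same shell-out (assumes crictl is on PATH and sudo works non-interactively):

	package main

	import (
	    "fmt"
	    "os/exec"
	    "strings"
	)

	func main() {
	    // Same invocation as in the log; --quiet prints one container ID per line.
	    out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	        "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	    if err != nil {
	        fmt.Println("crictl failed:", err)
	        return
	    }
	    for _, id := range strings.Fields(string(out)) {
	        fmt.Println("found id:", id)
	    }
	}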
	
	
	==> CRI-O <==
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.439968708Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745249439946421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8f13695-8a76-4849-8ef4-c2bbfd2574db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.440591279Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7ca0af32-fea5-40c7-8f57-ca131945c3ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.440719361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7ca0af32-fea5-40c7-8f57-ca131945c3ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.441038618Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bcbc2400a6bc10fb610629028a1f580df8eacbef273dc1e2887b8bd355ec1dc,PodSandboxId:9654f0dfe2b2a329465e665bdd9df6552b82358853f9d1e0f4e1981499d6da86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723745183912408214,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b,PodSandboxId:e840391611d579fe922b750bbd748741c31482297e55bfc9dfd0911959b2fac2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723745150315984903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81,PodSandboxId:7a0cec1e6c28dcb19cd8b1da08bd7b126ab05e1f6b64db8c986daf7812f0dd68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723745150371025382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73e37bc2dbdb91ad306b4047d3db2d22e0197f1c9778b06d4b967201c83286a,PodSandboxId:545bd160e23a18d5731c8c633c7ae1688f55e9e12629e86d4dc0e4dd9cc185a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745150230337519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082,PodSandboxId:90f33d9a591545ba2d39caa46f713554d1b2900c512d3565ef6fe47b8fde1b63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723745150199396711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716,PodSandboxId:28a7289752b82ecc293f6fd997b71c299c83c818769b3c14a0838b8b9d22da8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723745146331754623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5,PodSandboxId:d4e1240abe3624ac34dd05e70a13e47ddae0933c71b4fd8caa742d49ed62c63c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723745146348109190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6,PodSandboxId:1a75875531d2399ccd8fcbc6d0ca93411197449248ca0cb5bbfc5b67b3454bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723745146315478663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc,PodSandboxId:9d2d5e1d08314d5d95c09908a2ee37d9d3a5b1655ad1371c8a541779e783cb32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723745146276141066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d22d50a70ac6f51b2ef8ac44cd4dbf68940601ef90765ab9de8e21a11150e97,PodSandboxId:3404e4fc3eb6d4d9a1cbfb7f2711143d681d2a4e4b10789bd040c634792ce33e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723744819620335698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0,PodSandboxId:a2d11e45774519fab256b4fbc1a928e4a3707d7a808fd4ee3b6f3ccb789a2b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723744763874119067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f,PodSandboxId:f59a1504b34b8697570eed7aecc8fdbdbb66072afde97e2472575ecafeaad732,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723744763894025525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77,PodSandboxId:09bb5321f840662e81b7010b62d9865f05b0a4d1f63eba891803debbcf8730f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723744752109328605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee,PodSandboxId:89e5bd0b5f5109af0f557ff83b42b4f63edb836582e4cbf67efcc233c3734ce2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723744748053403168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469,PodSandboxId:ba409cd10440ace2c71920dfb0f78837a7d4f4341912a854ba354d08ebb1d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723744737137852356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0,PodSandboxId:640ede242ce296a21c033c6132a422480853c49db339d1819f7a1b3ffc3622bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723744737149133677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:
map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed,PodSandboxId:879bc13e372e6829d7418e5d905d3e17c0e2da3152713d4f5eb29f20edd7a18a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723744737046724472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a,PodSandboxId:40647a4b20092c0585cff98c0321844713b1037dd6991ae0318706a5a7e14751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723744736988914968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7ca0af32-fea5-40c7-8f57-ca131945c3ae name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.481126095Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=096730ab-7577-4720-8539-b94accfc8f76 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.481217227Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=096730ab-7577-4720-8539-b94accfc8f76 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.482508838Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1c4a8a1-2872-4cbe-a514-c67ed048cad5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.483064305Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745249483041412,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1c4a8a1-2872-4cbe-a514-c67ed048cad5 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.483672984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f8050419-777e-4273-9d2c-37d9632dd400 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.483748306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f8050419-777e-4273-9d2c-37d9632dd400 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.484075472Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bcbc2400a6bc10fb610629028a1f580df8eacbef273dc1e2887b8bd355ec1dc,PodSandboxId:9654f0dfe2b2a329465e665bdd9df6552b82358853f9d1e0f4e1981499d6da86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723745183912408214,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b,PodSandboxId:e840391611d579fe922b750bbd748741c31482297e55bfc9dfd0911959b2fac2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723745150315984903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81,PodSandboxId:7a0cec1e6c28dcb19cd8b1da08bd7b126ab05e1f6b64db8c986daf7812f0dd68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723745150371025382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73e37bc2dbdb91ad306b4047d3db2d22e0197f1c9778b06d4b967201c83286a,PodSandboxId:545bd160e23a18d5731c8c633c7ae1688f55e9e12629e86d4dc0e4dd9cc185a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745150230337519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082,PodSandboxId:90f33d9a591545ba2d39caa46f713554d1b2900c512d3565ef6fe47b8fde1b63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723745150199396711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716,PodSandboxId:28a7289752b82ecc293f6fd997b71c299c83c818769b3c14a0838b8b9d22da8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723745146331754623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5,PodSandboxId:d4e1240abe3624ac34dd05e70a13e47ddae0933c71b4fd8caa742d49ed62c63c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723745146348109190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6,PodSandboxId:1a75875531d2399ccd8fcbc6d0ca93411197449248ca0cb5bbfc5b67b3454bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723745146315478663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc,PodSandboxId:9d2d5e1d08314d5d95c09908a2ee37d9d3a5b1655ad1371c8a541779e783cb32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723745146276141066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d22d50a70ac6f51b2ef8ac44cd4dbf68940601ef90765ab9de8e21a11150e97,PodSandboxId:3404e4fc3eb6d4d9a1cbfb7f2711143d681d2a4e4b10789bd040c634792ce33e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723744819620335698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0,PodSandboxId:a2d11e45774519fab256b4fbc1a928e4a3707d7a808fd4ee3b6f3ccb789a2b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723744763874119067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f,PodSandboxId:f59a1504b34b8697570eed7aecc8fdbdbb66072afde97e2472575ecafeaad732,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723744763894025525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77,PodSandboxId:09bb5321f840662e81b7010b62d9865f05b0a4d1f63eba891803debbcf8730f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723744752109328605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee,PodSandboxId:89e5bd0b5f5109af0f557ff83b42b4f63edb836582e4cbf67efcc233c3734ce2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723744748053403168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469,PodSandboxId:ba409cd10440ace2c71920dfb0f78837a7d4f4341912a854ba354d08ebb1d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723744737137852356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0,PodSandboxId:640ede242ce296a21c033c6132a422480853c49db339d1819f7a1b3ffc3622bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723744737149133677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:
map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed,PodSandboxId:879bc13e372e6829d7418e5d905d3e17c0e2da3152713d4f5eb29f20edd7a18a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723744737046724472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a,PodSandboxId:40647a4b20092c0585cff98c0321844713b1037dd6991ae0318706a5a7e14751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723744736988914968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f8050419-777e-4273-9d2c-37d9632dd400 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.525319039Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37bd7c3f-a15b-4916-b13a-51104b52e9f4 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.525411870Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37bd7c3f-a15b-4916-b13a-51104b52e9f4 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.526717600Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b556272-85de-46aa-ab09-db1e3cf39e25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.527138134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745249527115959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b556272-85de-46aa-ab09-db1e3cf39e25 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.527769136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=76a24618-c018-4d98-a978-acf48542aea5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.527847065Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=76a24618-c018-4d98-a978-acf48542aea5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.528173599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bcbc2400a6bc10fb610629028a1f580df8eacbef273dc1e2887b8bd355ec1dc,PodSandboxId:9654f0dfe2b2a329465e665bdd9df6552b82358853f9d1e0f4e1981499d6da86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723745183912408214,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b,PodSandboxId:e840391611d579fe922b750bbd748741c31482297e55bfc9dfd0911959b2fac2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723745150315984903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81,PodSandboxId:7a0cec1e6c28dcb19cd8b1da08bd7b126ab05e1f6b64db8c986daf7812f0dd68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723745150371025382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73e37bc2dbdb91ad306b4047d3db2d22e0197f1c9778b06d4b967201c83286a,PodSandboxId:545bd160e23a18d5731c8c633c7ae1688f55e9e12629e86d4dc0e4dd9cc185a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745150230337519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082,PodSandboxId:90f33d9a591545ba2d39caa46f713554d1b2900c512d3565ef6fe47b8fde1b63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723745150199396711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716,PodSandboxId:28a7289752b82ecc293f6fd997b71c299c83c818769b3c14a0838b8b9d22da8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723745146331754623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5,PodSandboxId:d4e1240abe3624ac34dd05e70a13e47ddae0933c71b4fd8caa742d49ed62c63c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723745146348109190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6,PodSandboxId:1a75875531d2399ccd8fcbc6d0ca93411197449248ca0cb5bbfc5b67b3454bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723745146315478663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc,PodSandboxId:9d2d5e1d08314d5d95c09908a2ee37d9d3a5b1655ad1371c8a541779e783cb32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723745146276141066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d22d50a70ac6f51b2ef8ac44cd4dbf68940601ef90765ab9de8e21a11150e97,PodSandboxId:3404e4fc3eb6d4d9a1cbfb7f2711143d681d2a4e4b10789bd040c634792ce33e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723744819620335698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0,PodSandboxId:a2d11e45774519fab256b4fbc1a928e4a3707d7a808fd4ee3b6f3ccb789a2b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723744763874119067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f,PodSandboxId:f59a1504b34b8697570eed7aecc8fdbdbb66072afde97e2472575ecafeaad732,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723744763894025525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77,PodSandboxId:09bb5321f840662e81b7010b62d9865f05b0a4d1f63eba891803debbcf8730f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723744752109328605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee,PodSandboxId:89e5bd0b5f5109af0f557ff83b42b4f63edb836582e4cbf67efcc233c3734ce2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723744748053403168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469,PodSandboxId:ba409cd10440ace2c71920dfb0f78837a7d4f4341912a854ba354d08ebb1d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723744737137852356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0,PodSandboxId:640ede242ce296a21c033c6132a422480853c49db339d1819f7a1b3ffc3622bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723744737149133677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:
map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed,PodSandboxId:879bc13e372e6829d7418e5d905d3e17c0e2da3152713d4f5eb29f20edd7a18a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723744737046724472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a,PodSandboxId:40647a4b20092c0585cff98c0321844713b1037dd6991ae0318706a5a7e14751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723744736988914968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=76a24618-c018-4d98-a978-acf48542aea5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.569220734Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d2f1679-b89f-40e1-8c5a-3ed90a83264e name=/runtime.v1.RuntimeService/Version
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.569311472Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d2f1679-b89f-40e1-8c5a-3ed90a83264e name=/runtime.v1.RuntimeService/Version
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.570172047Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b84d5461-7f0d-4c39-8cb4-7170131ce3b9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.570563878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745249570545792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b84d5461-7f0d-4c39-8cb4-7170131ce3b9 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.571267110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=792a0564-4584-4e35-8cd3-a2fbb3aafaa7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.571333304Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=792a0564-4584-4e35-8cd3-a2fbb3aafaa7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:07:29 multinode-769827 crio[2772]: time="2024-08-15 18:07:29.571717431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bcbc2400a6bc10fb610629028a1f580df8eacbef273dc1e2887b8bd355ec1dc,PodSandboxId:9654f0dfe2b2a329465e665bdd9df6552b82358853f9d1e0f4e1981499d6da86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723745183912408214,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b,PodSandboxId:e840391611d579fe922b750bbd748741c31482297e55bfc9dfd0911959b2fac2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723745150315984903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81,PodSandboxId:7a0cec1e6c28dcb19cd8b1da08bd7b126ab05e1f6b64db8c986daf7812f0dd68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723745150371025382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73e37bc2dbdb91ad306b4047d3db2d22e0197f1c9778b06d4b967201c83286a,PodSandboxId:545bd160e23a18d5731c8c633c7ae1688f55e9e12629e86d4dc0e4dd9cc185a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745150230337519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082,PodSandboxId:90f33d9a591545ba2d39caa46f713554d1b2900c512d3565ef6fe47b8fde1b63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723745150199396711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716,PodSandboxId:28a7289752b82ecc293f6fd997b71c299c83c818769b3c14a0838b8b9d22da8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723745146331754623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5,PodSandboxId:d4e1240abe3624ac34dd05e70a13e47ddae0933c71b4fd8caa742d49ed62c63c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723745146348109190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6,PodSandboxId:1a75875531d2399ccd8fcbc6d0ca93411197449248ca0cb5bbfc5b67b3454bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723745146315478663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc,PodSandboxId:9d2d5e1d08314d5d95c09908a2ee37d9d3a5b1655ad1371c8a541779e783cb32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723745146276141066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d22d50a70ac6f51b2ef8ac44cd4dbf68940601ef90765ab9de8e21a11150e97,PodSandboxId:3404e4fc3eb6d4d9a1cbfb7f2711143d681d2a4e4b10789bd040c634792ce33e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723744819620335698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0,PodSandboxId:a2d11e45774519fab256b4fbc1a928e4a3707d7a808fd4ee3b6f3ccb789a2b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723744763874119067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f,PodSandboxId:f59a1504b34b8697570eed7aecc8fdbdbb66072afde97e2472575ecafeaad732,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723744763894025525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77,PodSandboxId:09bb5321f840662e81b7010b62d9865f05b0a4d1f63eba891803debbcf8730f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723744752109328605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee,PodSandboxId:89e5bd0b5f5109af0f557ff83b42b4f63edb836582e4cbf67efcc233c3734ce2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723744748053403168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469,PodSandboxId:ba409cd10440ace2c71920dfb0f78837a7d4f4341912a854ba354d08ebb1d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723744737137852356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0,PodSandboxId:640ede242ce296a21c033c6132a422480853c49db339d1819f7a1b3ffc3622bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723744737149133677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:
map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed,PodSandboxId:879bc13e372e6829d7418e5d905d3e17c0e2da3152713d4f5eb29f20edd7a18a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723744737046724472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a,PodSandboxId:40647a4b20092c0585cff98c0321844713b1037dd6991ae0318706a5a7e14751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723744736988914968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=792a0564-4584-4e35-8cd3-a2fbb3aafaa7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6bcbc2400a6bc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   9654f0dfe2b2a       busybox-7dff88458-jrvlv
	051882e6acf4a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   7a0cec1e6c28d       coredns-6f6b679f8f-d5zq9
	c133435cb4e31       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   e840391611d57       kindnet-wt8bf
	d73e37bc2dbdb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   545bd160e23a1       storage-provisioner
	5dd5e3abd823c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      About a minute ago   Running             kube-proxy                1                   90f33d9a59154       kube-proxy-hh9zj
	704afc72580d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   d4e1240abe362       etcd-multinode-769827
	f9907340fbd8c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      About a minute ago   Running             kube-scheduler            1                   28a7289752b82       kube-scheduler-multinode-769827
	0c69af92d63ad       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      About a minute ago   Running             kube-apiserver            1                   1a75875531d23       kube-apiserver-multinode-769827
	8123420b5cbe4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      About a minute ago   Running             kube-controller-manager   1                   9d2d5e1d08314       kube-controller-manager-multinode-769827
	4d22d50a70ac6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   3404e4fc3eb6d       busybox-7dff88458-jrvlv
	65ef23da92ccf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      8 minutes ago        Exited              coredns                   0                   f59a1504b34b8       coredns-6f6b679f8f-d5zq9
	6badbf1f14b9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   a2d11e4577451       storage-provisioner
	29c39838952dc       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    8 minutes ago        Exited              kindnet-cni               0                   09bb5321f8406       kindnet-wt8bf
	fbe2ea6e1d672       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      8 minutes ago        Exited              kube-proxy                0                   89e5bd0b5f510       kube-proxy-hh9zj
	99b3bcdf65e5f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      8 minutes ago        Exited              kube-apiserver            0                   640ede242ce29       kube-apiserver-multinode-769827
	006f9c6202ca9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   ba409cd10440a       etcd-multinode-769827
	75cd818d80b96       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      8 minutes ago        Exited              kube-controller-manager   0                   879bc13e372e6       kube-controller-manager-multinode-769827
	77661e4bf365e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      8 minutes ago        Exited              kube-scheduler            0                   40647a4b20092       kube-scheduler-multinode-769827
	
	
	==> coredns [051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57737 - 30846 "HINFO IN 613272218464715039.6391360153872604405. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015756917s
	
	
	==> coredns [65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f] <==
	[INFO] 10.244.1.2:36465 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001479288s
	[INFO] 10.244.1.2:59000 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106516s
	[INFO] 10.244.1.2:57277 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061901s
	[INFO] 10.244.1.2:56816 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158854s
	[INFO] 10.244.1.2:60901 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128073s
	[INFO] 10.244.1.2:46179 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056327s
	[INFO] 10.244.1.2:52150 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051547s
	[INFO] 10.244.0.3:53403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096801s
	[INFO] 10.244.0.3:59707 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000039357s
	[INFO] 10.244.0.3:40454 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051731s
	[INFO] 10.244.0.3:39818 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029676s
	[INFO] 10.244.1.2:55990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122435s
	[INFO] 10.244.1.2:33756 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080074s
	[INFO] 10.244.1.2:52274 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103858s
	[INFO] 10.244.1.2:57630 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080723s
	[INFO] 10.244.0.3:58766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103835s
	[INFO] 10.244.0.3:37671 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085473s
	[INFO] 10.244.0.3:42401 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00033564s
	[INFO] 10.244.0.3:39167 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188704s
	[INFO] 10.244.1.2:34856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139374s
	[INFO] 10.244.1.2:41841 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108964s
	[INFO] 10.244.1.2:56881 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107879s
	[INFO] 10.244.1.2:53178 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000161933s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-769827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-769827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=multinode-769827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_59_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:58:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-769827
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:07:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:05:49 +0000   Thu, 15 Aug 2024 17:58:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:05:49 +0000   Thu, 15 Aug 2024 17:58:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:05:49 +0000   Thu, 15 Aug 2024 17:58:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:05:49 +0000   Thu, 15 Aug 2024 17:59:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    multinode-769827
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e15ffbed288486092b0fdf6bedd0076
	  System UUID:                4e15ffbe-d288-4860-92b0-fdf6bedd0076
	  Boot ID:                    40b4a32b-9d7d-4a8d-9166-0a48755633cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jrvlv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 coredns-6f6b679f8f-d5zq9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m22s
	  kube-system                 etcd-multinode-769827                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m27s
	  kube-system                 kindnet-wt8bf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m23s
	  kube-system                 kube-apiserver-multinode-769827             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-controller-manager-multinode-769827    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-proxy-hh9zj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  kube-system                 kube-scheduler-multinode-769827             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 8m21s                kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasSufficientPID     8m27s                kubelet          Node multinode-769827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m27s                kubelet          Node multinode-769827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s                kubelet          Node multinode-769827 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 8m27s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m23s                node-controller  Node multinode-769827 event: Registered Node multinode-769827 in Controller
	  Normal  NodeReady                8m6s                 kubelet          Node multinode-769827 status is now: NodeReady
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s (x8 over 104s)  kubelet          Node multinode-769827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s (x8 over 104s)  kubelet          Node multinode-769827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s (x7 over 104s)  kubelet          Node multinode-769827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                  node-controller  Node multinode-769827 event: Registered Node multinode-769827 in Controller
	
	
	Name:               multinode-769827-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-769827-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=multinode-769827
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T18_06_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:06:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-769827-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:07:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:07:00 +0000   Thu, 15 Aug 2024 18:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:07:00 +0000   Thu, 15 Aug 2024 18:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:07:00 +0000   Thu, 15 Aug 2024 18:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:07:00 +0000   Thu, 15 Aug 2024 18:06:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    multinode-769827-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e711d8d406f4f60b3dcd5552dea75d6
	  System UUID:                5e711d8d-406f-4f60-b3dc-d5552dea75d6
	  Boot ID:                    0a046e00-c2cc-43e7-a84d-b460d2c4f4b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7pwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kindnet-b7s6v              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m34s
	  kube-system                 kube-proxy-cwn29           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m30s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m34s (x2 over 7m35s)  kubelet     Node multinode-769827-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m34s (x2 over 7m35s)  kubelet     Node multinode-769827-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m34s (x2 over 7m35s)  kubelet     Node multinode-769827-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m15s                  kubelet     Node multinode-769827-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)      kubelet     Node multinode-769827-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)      kubelet     Node multinode-769827-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)      kubelet     Node multinode-769827-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-769827-m02 status is now: NodeReady
	
	
	Name:               multinode-769827-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-769827-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=multinode-769827
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T18_07_08_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:07:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-769827-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:07:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:07:26 +0000   Thu, 15 Aug 2024 18:07:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:07:26 +0000   Thu, 15 Aug 2024 18:07:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:07:26 +0000   Thu, 15 Aug 2024 18:07:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:07:26 +0000   Thu, 15 Aug 2024 18:07:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    multinode-769827-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 79d0ef06a66a45998966a9993051ddb5
	  System UUID:                79d0ef06-a66a-4599-8966-a9993051ddb5
	  Boot ID:                    22b88b8c-e544-4a1c-8c92-21723a9eddb3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bbf9m       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m39s
	  kube-system                 kube-proxy-4lmfs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m45s                  kube-proxy       
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m39s (x2 over 6m39s)  kubelet          Node multinode-769827-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m39s (x2 over 6m39s)  kubelet          Node multinode-769827-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m39s (x2 over 6m39s)  kubelet          Node multinode-769827-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m19s                  kubelet          Node multinode-769827-m03 status is now: NodeReady
	  Normal  Starting                 5m50s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet          Node multinode-769827-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet          Node multinode-769827-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet          Node multinode-769827-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m30s                  kubelet          Node multinode-769827-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22s (x2 over 22s)      kubelet          Node multinode-769827-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x2 over 22s)      kubelet          Node multinode-769827-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x2 over 22s)      kubelet          Node multinode-769827-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17s                    node-controller  Node multinode-769827-m03 event: Registered Node multinode-769827-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-769827-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.068024] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.214059] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.133232] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.292170] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +3.955161] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +3.828498] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.064114] kauditd_printk_skb: 158 callbacks suppressed
	[Aug15 17:59] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.093748] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.462854] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.133574] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +5.262886] kauditd_printk_skb: 59 callbacks suppressed
	[Aug15 18:00] kauditd_printk_skb: 12 callbacks suppressed
	[Aug15 18:05] systemd-fstab-generator[2690]: Ignoring "noauto" option for root device
	[  +0.155129] systemd-fstab-generator[2702]: Ignoring "noauto" option for root device
	[  +0.168418] systemd-fstab-generator[2716]: Ignoring "noauto" option for root device
	[  +0.141315] systemd-fstab-generator[2728]: Ignoring "noauto" option for root device
	[  +0.291455] systemd-fstab-generator[2756]: Ignoring "noauto" option for root device
	[  +5.500315] systemd-fstab-generator[2856]: Ignoring "noauto" option for root device
	[  +0.083284] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.429177] systemd-fstab-generator[2980]: Ignoring "noauto" option for root device
	[  +4.624578] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.812681] kauditd_printk_skb: 34 callbacks suppressed
	[Aug15 18:06] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[ +18.146233] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469] <==
	{"level":"info","ts":"2024-08-15T17:59:55.096789Z","caller":"traceutil/trace.go:171","msg":"trace[240708071] linearizableReadLoop","detail":"{readStateIndex:462; appliedIndex:461; }","duration":"136.366175ms","start":"2024-08-15T17:59:54.960393Z","end":"2024-08-15T17:59:55.096759Z","steps":["trace[240708071] 'read index received'  (duration: 23.461µs)","trace[240708071] 'applied index is now lower than readState.Index'  (duration: 136.340902ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:59:55.096944Z","caller":"traceutil/trace.go:171","msg":"trace[1919787891] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"235.435522ms","start":"2024-08-15T17:59:54.861497Z","end":"2024-08-15T17:59:55.096932Z","steps":["trace[1919787891] 'process raft request'  (duration: 87.662459ms)","trace[1919787891] 'compare'  (duration: 146.731994ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T17:59:55.097254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.850909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-769827-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:59:55.097341Z","caller":"traceutil/trace.go:171","msg":"trace[1680751135] range","detail":"{range_begin:/registry/minions/multinode-769827-m02; range_end:; response_count:0; response_revision:442; }","duration":"136.935176ms","start":"2024-08-15T17:59:54.960389Z","end":"2024-08-15T17:59:55.097324Z","steps":["trace[1680751135] 'agreement among raft nodes before linearized reading'  (duration: 136.8082ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:00:00.402997Z","caller":"traceutil/trace.go:171","msg":"trace[1278451307] linearizableReadLoop","detail":"{readStateIndex:503; appliedIndex:502; }","duration":"142.148778ms","start":"2024-08-15T18:00:00.260833Z","end":"2024-08-15T18:00:00.402982Z","steps":["trace[1278451307] 'read index received'  (duration: 141.99755ms)","trace[1278451307] 'applied index is now lower than readState.Index'  (duration: 150.722µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:00:00.403176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.314544ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-15T18:00:00.403220Z","caller":"traceutil/trace.go:171","msg":"trace[151106694] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:481; }","duration":"142.384437ms","start":"2024-08-15T18:00:00.260829Z","end":"2024-08-15T18:00:00.403213Z","steps":["trace[151106694] 'agreement among raft nodes before linearized reading'  (duration: 142.295595ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:00:00.403236Z","caller":"traceutil/trace.go:171","msg":"trace[1693998488] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"155.506903ms","start":"2024-08-15T18:00:00.247716Z","end":"2024-08-15T18:00:00.403223Z","steps":["trace[1693998488] 'process raft request'  (duration: 155.157007ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:00:50.806192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.4541ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9419438424321490062 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-769827-m03.17ebf8cf0cdf8797\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-769827-m03.17ebf8cf0cdf8797\" value_size:646 lease:196066387466713866 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T18:00:50.806391Z","caller":"traceutil/trace.go:171","msg":"trace[2086854109] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"209.124861ms","start":"2024-08-15T18:00:50.597233Z","end":"2024-08-15T18:00:50.806358Z","steps":["trace[2086854109] 'read index received'  (duration: 54.306132ms)","trace[2086854109] 'applied index is now lower than readState.Index'  (duration: 154.817804ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:00:50.806504Z","caller":"traceutil/trace.go:171","msg":"trace[1053483193] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"230.59659ms","start":"2024-08-15T18:00:50.575891Z","end":"2024-08-15T18:00:50.806487Z","steps":["trace[1053483193] 'process raft request'  (duration: 75.680765ms)","trace[1053483193] 'compare'  (duration: 154.333666ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:00:50.806846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.607844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-769827-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:00:50.806953Z","caller":"traceutil/trace.go:171","msg":"trace[1078551274] range","detail":"{range_begin:/registry/minions/multinode-769827-m03; range_end:; response_count:0; response_revision:575; }","duration":"209.712271ms","start":"2024-08-15T18:00:50.597228Z","end":"2024-08-15T18:00:50.806941Z","steps":["trace[1078551274] 'agreement among raft nodes before linearized reading'  (duration: 209.592282ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:00:50.806846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.547762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-08-15T18:00:50.807092Z","caller":"traceutil/trace.go:171","msg":"trace[2094307122] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:575; }","duration":"169.808609ms","start":"2024-08-15T18:00:50.637275Z","end":"2024-08-15T18:00:50.807084Z","steps":["trace[2094307122] 'agreement among raft nodes before linearized reading'  (duration: 169.402237ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:04:05.417942Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T18:04:05.418080Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-769827","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"]}
	{"level":"warn","ts":"2024-08-15T18:04:05.418319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T18:04:05.418441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T18:04:05.470490Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.73:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T18:04:05.470550Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.73:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T18:04:05.470665Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"217be714ae9a82b8","current-leader-member-id":"217be714ae9a82b8"}
	{"level":"info","ts":"2024-08-15T18:04:05.477521Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-08-15T18:04:05.477742Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-08-15T18:04:05.477778Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-769827","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"]}
	
	
	==> etcd [704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5] <==
	{"level":"info","ts":"2024-08-15T18:05:46.816006Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:05:46.816049Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:05:46.820790Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:05:46.827931Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T18:05:46.828218Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"217be714ae9a82b8","initial-advertise-peer-urls":["https://192.168.39.73:2380"],"listen-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T18:05:46.828263Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T18:05:46.828329Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-08-15T18:05:46.828351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-08-15T18:05:47.820328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T18:05:47.820368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T18:05:47.820435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgPreVoteResp from 217be714ae9a82b8 at term 2"}
	{"level":"info","ts":"2024-08-15T18:05:47.820450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T18:05:47.820456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgVoteResp from 217be714ae9a82b8 at term 3"}
	{"level":"info","ts":"2024-08-15T18:05:47.820475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became leader at term 3"}
	{"level":"info","ts":"2024-08-15T18:05:47.820482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 217be714ae9a82b8 elected leader 217be714ae9a82b8 at term 3"}
	{"level":"info","ts":"2024-08-15T18:05:47.826215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:05:47.827367Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:05:47.828308Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.73:2379"}
	{"level":"info","ts":"2024-08-15T18:05:47.828747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:05:47.829364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:05:47.830271Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T18:05:47.826162Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"217be714ae9a82b8","local-member-attributes":"{Name:multinode-769827 ClientURLs:[https://192.168.39.73:2379]}","request-path":"/0/members/217be714ae9a82b8/attributes","cluster-id":"97141299b087eff6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T18:05:47.831805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T18:05:47.831836Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T18:07:11.204120Z","caller":"traceutil/trace.go:171","msg":"trace[248975640] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"125.788156ms","start":"2024-08-15T18:07:11.078291Z","end":"2024-08-15T18:07:11.204079Z","steps":["trace[248975640] 'process raft request'  (duration: 125.671057ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:07:30 up 9 min,  0 users,  load average: 0.23, 0.16, 0.07
	Linux multinode-769827 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77] <==
	I0815 18:03:23.237420       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:03:33.233320       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:03:33.233347       1 main.go:299] handling current node
	I0815 18:03:33.233361       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:03:33.233366       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:03:33.233495       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:03:33.233519       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:03:43.227704       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:03:43.227754       1 main.go:299] handling current node
	I0815 18:03:43.227768       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:03:43.227774       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:03:43.227932       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:03:43.227960       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:03:53.236730       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:03:53.236833       1 main.go:299] handling current node
	I0815 18:03:53.236863       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:03:53.236882       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:03:53.237021       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:03:53.237044       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:04:03.235722       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:04:03.235779       1 main.go:299] handling current node
	I0815 18:04:03.235803       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:04:03.235811       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:04:03.235956       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:04:03.235980       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b] <==
	I0815 18:06:41.334760       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:06:51.334893       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:06:51.335057       1 main.go:299] handling current node
	I0815 18:06:51.335095       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:06:51.335122       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:06:51.335403       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:06:51.335908       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:07:01.334684       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:07:01.334873       1 main.go:299] handling current node
	I0815 18:07:01.334910       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:07:01.334935       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:07:01.335163       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:07:01.335212       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:07:11.334244       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:07:11.334297       1 main.go:299] handling current node
	I0815 18:07:11.334316       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:07:11.334323       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:07:11.334513       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:07:11.334522       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.2.0/24] 
	I0815 18:07:21.335773       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:07:21.335817       1 main.go:299] handling current node
	I0815 18:07:21.335832       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:07:21.335837       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:07:21.335957       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:07:21.335978       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6] <==
	I0815 18:05:49.192142       1 aggregator.go:171] initial CRD sync complete...
	I0815 18:05:49.192243       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 18:05:49.192256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 18:05:49.226015       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 18:05:49.236123       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 18:05:49.236164       1 policy_source.go:224] refreshing policies
	I0815 18:05:49.238170       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 18:05:49.285568       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 18:05:49.289494       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 18:05:49.291562       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 18:05:49.289502       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 18:05:49.289511       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 18:05:49.292108       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 18:05:49.294887       1 cache.go:39] Caches are synced for autoregister controller
	I0815 18:05:49.294903       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 18:05:49.303168       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0815 18:05:49.304132       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0815 18:05:50.098195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 18:05:51.272387       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 18:05:51.410210       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 18:05:51.422993       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 18:05:51.482930       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 18:05:51.489180       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 18:05:52.811848       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 18:05:52.961510       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0] <==
	W0815 18:04:05.459249       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.459344       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.459434       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.459466       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.459557       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460310       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460351       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460444       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460530       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460705       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460744       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460832       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460928       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460959       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.461399       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.461511       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464190       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464270       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464305       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464336       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464376       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464410       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464443       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464478       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464516       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed] <==
	I0815 18:01:38.646880       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:38.647009       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:01:40.183093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:01:40.183150       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-769827-m03\" does not exist"
	I0815 18:01:40.198205       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-769827-m03" podCIDRs=["10.244.3.0/24"]
	I0815 18:01:40.198245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:40.198266       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:40.207147       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:40.220286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:40.542003       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:41.556654       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:50.284481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:59.988105       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:01:59.988359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:00.004846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:01.468177       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:41.489960       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m03"
	I0815 18:02:41.491852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:02:41.496322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:41.515005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:02:41.526662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:41.544679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.40158ms"
	I0815 18:02:41.545226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.793µs"
	I0815 18:02:46.558650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:02:56.634498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	
	
	==> kube-controller-manager [8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc] <==
	I0815 18:06:48.847347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:06:48.857382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.272µs"
	I0815 18:06:48.872286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="44.914µs"
	I0815 18:06:52.455951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.011518ms"
	I0815 18:06:52.456058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.617µs"
	I0815 18:06:52.764714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:07:00.658099       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:07:06.589031       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:06.611182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:06.837867       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:07:06.837969       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:07.925679       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-769827-m03\" does not exist"
	I0815 18:07:07.925964       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:07:07.936748       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-769827-m03" podCIDRs=["10.244.2.0/24"]
	I0815 18:07:07.936785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:07.936935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:07.946565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:08.379029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:08.725991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:12.883356       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:18.140512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:26.591750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m03"
	I0815 18:07:26.591890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:26.600410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:27.782569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	
	
	==> kube-proxy [5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:05:50.627723       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:05:50.645543       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.73"]
	E0815 18:05:50.645729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:05:50.706716       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:05:50.706887       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:05:50.706976       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:05:50.709674       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:05:50.709978       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:05:50.710135       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:05:50.711529       1 config.go:197] "Starting service config controller"
	I0815 18:05:50.711958       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:05:50.712025       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:05:50.712043       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:05:50.712581       1 config.go:326] "Starting node config controller"
	I0815 18:05:50.713268       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:05:50.812782       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:05:50.812825       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:05:50.814312       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:59:08.500017       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 17:59:08.515944       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.73"]
	E0815 17:59:08.516024       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:59:08.560879       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:59:08.560930       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:59:08.560967       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:59:08.567268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:59:08.567518       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:59:08.567549       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:59:08.570884       1 config.go:197] "Starting service config controller"
	I0815 17:59:08.570935       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:59:08.570960       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:59:08.570964       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:59:08.571664       1 config.go:326] "Starting node config controller"
	I0815 17:59:08.571690       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:59:08.671813       1 shared_informer.go:320] Caches are synced for node config
	I0815 17:59:08.671847       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:59:08.671860       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a] <==
	E0815 17:58:59.592443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:58:59.592401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:58:59.592662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.493836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 17:59:00.493940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.558358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:59:00.558488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.672788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:59:00.672840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.737548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:59:00.737637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.758971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:59:00.759020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.761825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:59:00.761869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.777501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:59:00.777553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.778795       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 17:59:00.778834       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 17:59:00.781247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 17:59:00.781285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.877722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:59:00.877773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0815 17:59:02.787867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 18:04:05.427522       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716] <==
	I0815 18:05:47.655158       1 serving.go:386] Generated self-signed cert in-memory
	I0815 18:05:49.259327       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 18:05:49.259578       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:05:49.266770       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 18:05:49.267113       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0815 18:05:49.267222       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0815 18:05:49.267326       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 18:05:49.268853       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 18:05:49.268952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 18:05:49.269056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0815 18:05:49.269080       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0815 18:05:49.367795       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0815 18:05:49.369148       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 18:05:49.369413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:05:55 multinode-769827 kubelet[2987]: E0815 18:05:55.735059    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745155734190421,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:05:57 multinode-769827 kubelet[2987]: I0815 18:05:57.113561    2987 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Aug 15 18:06:05 multinode-769827 kubelet[2987]: E0815 18:06:05.737719    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745165736351825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:05 multinode-769827 kubelet[2987]: E0815 18:06:05.737807    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745165736351825,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:15 multinode-769827 kubelet[2987]: E0815 18:06:15.740111    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745175739156580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:15 multinode-769827 kubelet[2987]: E0815 18:06:15.740273    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745175739156580,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:25 multinode-769827 kubelet[2987]: E0815 18:06:25.741913    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745185741681363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:25 multinode-769827 kubelet[2987]: E0815 18:06:25.741941    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745185741681363,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:35 multinode-769827 kubelet[2987]: E0815 18:06:35.744208    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745195743924381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:35 multinode-769827 kubelet[2987]: E0815 18:06:35.744243    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745195743924381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:45 multinode-769827 kubelet[2987]: E0815 18:06:45.694303    2987 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:06:45 multinode-769827 kubelet[2987]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:06:45 multinode-769827 kubelet[2987]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:06:45 multinode-769827 kubelet[2987]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:06:45 multinode-769827 kubelet[2987]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:06:45 multinode-769827 kubelet[2987]: E0815 18:06:45.750130    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745205749851381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:45 multinode-769827 kubelet[2987]: E0815 18:06:45.750158    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745205749851381,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:55 multinode-769827 kubelet[2987]: E0815 18:06:55.752861    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745215752008409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:06:55 multinode-769827 kubelet[2987]: E0815 18:06:55.753410    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745215752008409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:07:05 multinode-769827 kubelet[2987]: E0815 18:07:05.760123    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745225755203844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:07:05 multinode-769827 kubelet[2987]: E0815 18:07:05.760329    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745225755203844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:07:15 multinode-769827 kubelet[2987]: E0815 18:07:15.761357    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745235761162479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:07:15 multinode-769827 kubelet[2987]: E0815 18:07:15.761423    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745235761162479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:07:25 multinode-769827 kubelet[2987]: E0815 18:07:25.764759    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745245763780049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:07:25 multinode-769827 kubelet[2987]: E0815 18:07:25.764795    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745245763780049,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:07:29.152486   51783 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19450-13013/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-769827 -n multinode-769827
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-769827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (328.32s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 stop
E0815 18:07:47.734298   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-769827 stop: exit status 82 (2m0.44888143s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-769827-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-769827 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status
E0815 18:09:35.294038   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-769827 status: exit status 3 (18.869902435s)

                                                
                                                
-- stdout --
	multinode-769827
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-769827-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:09:52.208769   52434 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0815 18:09:52.208805   52434 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-769827 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-769827 -n multinode-769827
E0815 18:09:52.218211   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-769827 logs -n 25: (1.432246349s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m02:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827:/home/docker/cp-test_multinode-769827-m02_multinode-769827.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827 sudo cat                                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m02_multinode-769827.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m02:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03:/home/docker/cp-test_multinode-769827-m02_multinode-769827-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827-m03 sudo cat                                   | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m02_multinode-769827-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp testdata/cp-test.txt                                                | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3791465198/001/cp-test_multinode-769827-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827:/home/docker/cp-test_multinode-769827-m03_multinode-769827.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827 sudo cat                                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m03_multinode-769827.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt                       | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02:/home/docker/cp-test_multinode-769827-m03_multinode-769827-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827-m02 sudo cat                                   | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m03_multinode-769827-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-769827 node stop m03                                                          | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	| node    | multinode-769827 node start                                                             | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-769827                                                                | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:02 UTC |                     |
	| stop    | -p multinode-769827                                                                     | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:02 UTC |                     |
	| start   | -p multinode-769827                                                                     | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:04 UTC | 15 Aug 24 18:07 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-769827                                                                | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:07 UTC |                     |
	| node    | multinode-769827 node delete                                                            | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:07 UTC | 15 Aug 24 18:07 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-769827 stop                                                                   | multinode-769827 | jenkins | v1.33.1 | 15 Aug 24 18:07 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:04:04
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:04:04.536275   50711 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:04:04.536548   50711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:04:04.536557   50711 out.go:358] Setting ErrFile to fd 2...
	I0815 18:04:04.536562   50711 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:04:04.536734   50711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:04:04.537227   50711 out.go:352] Setting JSON to false
	I0815 18:04:04.538056   50711 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6390,"bootTime":1723738654,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:04:04.538120   50711 start.go:139] virtualization: kvm guest
	I0815 18:04:04.540451   50711 out.go:177] * [multinode-769827] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:04:04.542097   50711 notify.go:220] Checking for updates...
	I0815 18:04:04.542137   50711 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:04:04.543622   50711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:04:04.545193   50711 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:04:04.546202   50711 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:04:04.547394   50711 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:04:04.548586   50711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:04:04.550282   50711 config.go:182] Loaded profile config "multinode-769827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:04:04.550369   50711 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:04:04.550900   50711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:04:04.550972   50711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:04:04.566990   50711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0815 18:04:04.567422   50711 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:04:04.567949   50711 main.go:141] libmachine: Using API Version  1
	I0815 18:04:04.567968   50711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:04:04.568380   50711 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:04:04.568663   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:04:04.605070   50711 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:04:04.606399   50711 start.go:297] selected driver: kvm2
	I0815 18:04:04.606423   50711 start.go:901] validating driver "kvm2" against &{Name:multinode-769827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.143 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:04:04.606563   50711 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:04:04.606901   50711 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:04:04.606963   50711 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:04:04.621754   50711 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:04:04.622389   50711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:04:04.622449   50711 cni.go:84] Creating CNI manager for ""
	I0815 18:04:04.622460   50711 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 18:04:04.622517   50711 start.go:340] cluster config:
	{Name:multinode-769827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.143 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:04:04.622630   50711 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:04:04.625122   50711 out.go:177] * Starting "multinode-769827" primary control-plane node in "multinode-769827" cluster
	I0815 18:04:04.626386   50711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:04:04.626422   50711 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:04:04.626437   50711 cache.go:56] Caching tarball of preloaded images
	I0815 18:04:04.626506   50711 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:04:04.626516   50711 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 18:04:04.626620   50711 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/config.json ...
	I0815 18:04:04.626795   50711 start.go:360] acquireMachinesLock for multinode-769827: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:04:04.626832   50711 start.go:364] duration metric: took 21.682µs to acquireMachinesLock for "multinode-769827"
	I0815 18:04:04.626855   50711 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:04:04.626862   50711 fix.go:54] fixHost starting: 
	I0815 18:04:04.627123   50711 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:04:04.627153   50711 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:04:04.641317   50711 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45169
	I0815 18:04:04.641779   50711 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:04:04.642281   50711 main.go:141] libmachine: Using API Version  1
	I0815 18:04:04.642302   50711 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:04:04.642683   50711 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:04:04.642849   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:04:04.642997   50711 main.go:141] libmachine: (multinode-769827) Calling .GetState
	I0815 18:04:04.644573   50711 fix.go:112] recreateIfNeeded on multinode-769827: state=Running err=<nil>
	W0815 18:04:04.644600   50711 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:04:04.646559   50711 out.go:177] * Updating the running kvm2 "multinode-769827" VM ...
	I0815 18:04:04.647738   50711 machine.go:93] provisionDockerMachine start ...
	I0815 18:04:04.647762   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:04:04.647960   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:04.650584   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.651000   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:04.651039   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.651164   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:04.651338   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.651513   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.651656   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:04.651820   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:04:04.652033   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:04:04.652048   50711 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:04:04.770393   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-769827
	
	I0815 18:04:04.770417   50711 main.go:141] libmachine: (multinode-769827) Calling .GetMachineName
	I0815 18:04:04.770717   50711 buildroot.go:166] provisioning hostname "multinode-769827"
	I0815 18:04:04.770741   50711 main.go:141] libmachine: (multinode-769827) Calling .GetMachineName
	I0815 18:04:04.770916   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:04.773577   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.773957   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:04.773992   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.774068   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:04.774258   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.774397   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.774542   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:04.774730   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:04:04.774902   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:04:04.774915   50711 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-769827 && echo "multinode-769827" | sudo tee /etc/hostname
	I0815 18:04:04.912302   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-769827
	
	I0815 18:04:04.912334   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:04.914903   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.915217   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:04.915260   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:04.915416   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:04.915608   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.915767   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:04.916003   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:04.916173   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:04:04.916332   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:04:04.916348   50711 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-769827' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-769827/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-769827' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:04:05.034355   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
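
The /etc/hosts guard that was just executed is idempotent: it only touches the 127.0.1.1 entry when the hostname is not already present. A minimal Go sketch of how such a fragment could be templated per hostname is shown below; the hostsFragment helper is hypothetical, for illustration only, and is not minikube's actual provisioner code.

package main

import "fmt"

// hostsFragment returns a shell snippet that adds or rewrites the 127.0.1.1
// entry for the given hostname, mirroring the command logged above.
// Hypothetical helper for illustration only.
func hostsFragment(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
}

func main() {
	fmt.Println(hostsFragment("multinode-769827"))
}
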
	I0815 18:04:05.034405   50711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:04:05.034436   50711 buildroot.go:174] setting up certificates
	I0815 18:04:05.034452   50711 provision.go:84] configureAuth start
	I0815 18:04:05.034469   50711 main.go:141] libmachine: (multinode-769827) Calling .GetMachineName
	I0815 18:04:05.034705   50711 main.go:141] libmachine: (multinode-769827) Calling .GetIP
	I0815 18:04:05.037455   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.037818   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:05.037844   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.037962   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:05.040038   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.040524   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:05.040550   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.040674   50711 provision.go:143] copyHostCerts
	I0815 18:04:05.040703   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:04:05.040742   50711 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:04:05.040755   50711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:04:05.040823   50711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:04:05.040906   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:04:05.040930   50711 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:04:05.040939   50711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:04:05.040973   50711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:04:05.041033   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:04:05.041053   50711 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:04:05.041062   50711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:04:05.041107   50711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:04:05.041179   50711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.multinode-769827 san=[127.0.0.1 192.168.39.73 localhost minikube multinode-769827]
	I0815 18:04:05.113018   50711 provision.go:177] copyRemoteCerts
	I0815 18:04:05.113081   50711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:04:05.113102   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:05.115719   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.116028   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:05.116056   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.116183   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:05.116356   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:05.116529   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:05.116644   50711 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:04:05.208573   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0815 18:04:05.208635   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:04:05.233618   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0815 18:04:05.233684   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0815 18:04:05.258637   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0815 18:04:05.258716   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:04:05.282418   50711 provision.go:87] duration metric: took 247.953382ms to configureAuth
	I0815 18:04:05.282441   50711 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:04:05.282662   50711 config.go:182] Loaded profile config "multinode-769827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:04:05.282734   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:04:05.285144   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.285534   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:04:05.285564   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:04:05.285700   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:04:05.285894   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:05.286044   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:04:05.286166   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:04:05.286305   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:04:05.286472   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:04:05.286488   50711 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:05:36.039243   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:05:36.039306   50711 machine.go:96] duration metric: took 1m31.391524206s to provisionDockerMachine
	I0815 18:05:36.039319   50711 start.go:293] postStartSetup for "multinode-769827" (driver="kvm2")
	I0815 18:05:36.039330   50711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:05:36.039351   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.039714   50711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:05:36.039747   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:05:36.042693   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.043156   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.043186   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.043347   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:05:36.043513   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.043653   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:05:36.043762   50711 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:05:36.132030   50711 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:05:36.136472   50711 command_runner.go:130] > NAME=Buildroot
	I0815 18:05:36.136508   50711 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0815 18:05:36.136515   50711 command_runner.go:130] > ID=buildroot
	I0815 18:05:36.136526   50711 command_runner.go:130] > VERSION_ID=2023.02.9
	I0815 18:05:36.136534   50711 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0815 18:05:36.136575   50711 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:05:36.136593   50711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:05:36.136662   50711 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:05:36.136734   50711 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:05:36.136743   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /etc/ssl/certs/202192.pem
	I0815 18:05:36.136823   50711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:05:36.146456   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:05:36.171777   50711 start.go:296] duration metric: took 132.445038ms for postStartSetup
	I0815 18:05:36.171822   50711 fix.go:56] duration metric: took 1m31.54495919s for fixHost
	I0815 18:05:36.171846   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:05:36.174685   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.175156   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.175208   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.175364   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:05:36.175564   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.175702   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.175819   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:05:36.175961   50711 main.go:141] libmachine: Using SSH client type: native
	I0815 18:05:36.176142   50711 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I0815 18:05:36.176156   50711 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:05:36.289590   50711 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723745136.269521186
	
	I0815 18:05:36.289621   50711 fix.go:216] guest clock: 1723745136.269521186
	I0815 18:05:36.289633   50711 fix.go:229] Guest: 2024-08-15 18:05:36.269521186 +0000 UTC Remote: 2024-08-15 18:05:36.171828223 +0000 UTC m=+91.669935516 (delta=97.692963ms)
	I0815 18:05:36.289662   50711 fix.go:200] guest clock delta is within tolerance: 97.692963ms
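
The step above compares the guest's `date +%s.%N` output against the host-side timestamp and accepts the machine when the drift is within tolerance. Below is a minimal sketch of that comparison using the two values from this log; clockDeltaOK is a hypothetical helper, and the 2-second tolerance is an assumption, since the tolerance minikube actually applies is not shown here.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and reports the drift
// from a reference host time and whether it falls within the given tolerance.
func clockDeltaOK(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// Values taken from the log above: guest clock vs. the host-side "Remote" timestamp.
	host := time.Date(2024, 8, 15, 18, 5, 36, 171828223, time.UTC)
	delta, ok, err := clockDeltaOK("1723745136.269521186\n", host, 2*time.Second)
	fmt.Println(delta, ok, err) // ~97.69ms true <nil>
}
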
	I0815 18:05:36.289688   50711 start.go:83] releasing machines lock for "multinode-769827", held for 1m31.662827859s
	I0815 18:05:36.289719   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.289990   50711 main.go:141] libmachine: (multinode-769827) Calling .GetIP
	I0815 18:05:36.292957   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.293289   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.293319   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.293610   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.294114   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.294275   50711 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:05:36.294384   50711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:05:36.294417   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:05:36.294504   50711 ssh_runner.go:195] Run: cat /version.json
	I0815 18:05:36.294522   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:05:36.296878   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.297149   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.297220   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.297240   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.297406   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:05:36.297557   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:36.297588   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.297651   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:36.297704   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:05:36.297790   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:05:36.297871   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:05:36.297954   50711 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:05:36.297980   50711 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:05:36.298089   50711 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:05:36.377837   50711 command_runner.go:130] > {"iso_version": "v1.33.1-1723650137-19443", "kicbase_version": "v0.0.44-1723567951-19429", "minikube_version": "v1.33.1", "commit": "0de88034feeac7cdc6e3fa82af59b9e46ac52b3e"}
	I0815 18:05:36.378030   50711 ssh_runner.go:195] Run: systemctl --version
	I0815 18:05:36.402962   50711 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0815 18:05:36.403014   50711 command_runner.go:130] > systemd 252 (252)
	I0815 18:05:36.403038   50711 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0815 18:05:36.403109   50711 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:05:36.566434   50711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 18:05:36.573211   50711 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0815 18:05:36.573669   50711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:05:36.573747   50711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:05:36.583143   50711 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 18:05:36.583164   50711 start.go:495] detecting cgroup driver to use...
	I0815 18:05:36.583233   50711 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:05:36.600529   50711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:05:36.615338   50711 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:05:36.615428   50711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:05:36.629543   50711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:05:36.642960   50711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:05:36.788221   50711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:05:36.931352   50711 docker.go:233] disabling docker service ...
	I0815 18:05:36.931425   50711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:05:36.947024   50711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:05:36.960693   50711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:05:37.100696   50711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:05:37.254567   50711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:05:37.269424   50711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:05:37.288346   50711 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0815 18:05:37.288654   50711 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:05:37.288704   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.299633   50711 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:05:37.299698   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.310602   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.321176   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.332819   50711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:05:37.344207   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.355147   50711 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.366091   50711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:05:37.376990   50711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:05:37.387168   50711 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0815 18:05:37.387261   50711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:05:37.396602   50711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:05:37.533930   50711 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:05:42.582037   50711 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.048069221s)
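
The sed edits above point cri-o at the registry.k8s.io/pause:3.10 image, switch it to the cgroupfs cgroup manager with conmon in the "pod" cgroup, and then reload systemd and restart crio. A minimal sketch of applying the same edits with os/exec follows, assuming it runs as root directly on the node; the real flow in this log executes each command over SSH on the guest.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The same edits the log shows being applied to /etc/crio/crio.conf.d/02-crio.conf,
	// run locally for illustration only.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		out, err := exec.Command("sh", "-c", c).CombinedOutput()
		if err != nil {
			fmt.Printf("%q failed: %v\n%s", c, err, out)
			return
		}
	}
	fmt.Println("cri-o reconfigured and restarted")
}
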
	I0815 18:05:42.582077   50711 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:05:42.582140   50711 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:05:42.587179   50711 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0815 18:05:42.587205   50711 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0815 18:05:42.587219   50711 command_runner.go:130] > Device: 0,22	Inode: 1323        Links: 1
	I0815 18:05:42.587228   50711 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 18:05:42.587235   50711 command_runner.go:130] > Access: 2024-08-15 18:05:42.457805645 +0000
	I0815 18:05:42.587244   50711 command_runner.go:130] > Modify: 2024-08-15 18:05:42.457805645 +0000
	I0815 18:05:42.587254   50711 command_runner.go:130] > Change: 2024-08-15 18:05:42.457805645 +0000
	I0815 18:05:42.587260   50711 command_runner.go:130] >  Birth: -
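
The "Will wait 60s for socket path" step above amounts to polling /var/run/crio/crio.sock until it exists as a socket or the deadline passes. A minimal sketch of such a wait loop is shown below; waitForSocket and the 500ms poll interval are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
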
	I0815 18:05:42.587280   50711 start.go:563] Will wait 60s for crictl version
	I0815 18:05:42.587321   50711 ssh_runner.go:195] Run: which crictl
	I0815 18:05:42.591344   50711 command_runner.go:130] > /usr/bin/crictl
	I0815 18:05:42.591427   50711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:05:42.630320   50711 command_runner.go:130] > Version:  0.1.0
	I0815 18:05:42.630460   50711 command_runner.go:130] > RuntimeName:  cri-o
	I0815 18:05:42.630479   50711 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0815 18:05:42.630573   50711 command_runner.go:130] > RuntimeApiVersion:  v1
	I0815 18:05:42.631777   50711 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:05:42.631840   50711 ssh_runner.go:195] Run: crio --version
	I0815 18:05:42.658659   50711 command_runner.go:130] > crio version 1.29.1
	I0815 18:05:42.658681   50711 command_runner.go:130] > Version:        1.29.1
	I0815 18:05:42.658687   50711 command_runner.go:130] > GitCommit:      unknown
	I0815 18:05:42.658691   50711 command_runner.go:130] > GitCommitDate:  unknown
	I0815 18:05:42.658695   50711 command_runner.go:130] > GitTreeState:   clean
	I0815 18:05:42.658700   50711 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0815 18:05:42.658705   50711 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 18:05:42.658708   50711 command_runner.go:130] > Compiler:       gc
	I0815 18:05:42.658713   50711 command_runner.go:130] > Platform:       linux/amd64
	I0815 18:05:42.658716   50711 command_runner.go:130] > Linkmode:       dynamic
	I0815 18:05:42.658720   50711 command_runner.go:130] > BuildTags:      
	I0815 18:05:42.658724   50711 command_runner.go:130] >   containers_image_ostree_stub
	I0815 18:05:42.658729   50711 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 18:05:42.658733   50711 command_runner.go:130] >   btrfs_noversion
	I0815 18:05:42.658738   50711 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 18:05:42.658744   50711 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 18:05:42.658748   50711 command_runner.go:130] >   seccomp
	I0815 18:05:42.658771   50711 command_runner.go:130] > LDFlags:          unknown
	I0815 18:05:42.658779   50711 command_runner.go:130] > SeccompEnabled:   true
	I0815 18:05:42.658784   50711 command_runner.go:130] > AppArmorEnabled:  false
	I0815 18:05:42.659967   50711 ssh_runner.go:195] Run: crio --version
	I0815 18:05:42.688024   50711 command_runner.go:130] > crio version 1.29.1
	I0815 18:05:42.688054   50711 command_runner.go:130] > Version:        1.29.1
	I0815 18:05:42.688062   50711 command_runner.go:130] > GitCommit:      unknown
	I0815 18:05:42.688068   50711 command_runner.go:130] > GitCommitDate:  unknown
	I0815 18:05:42.688073   50711 command_runner.go:130] > GitTreeState:   clean
	I0815 18:05:42.688080   50711 command_runner.go:130] > BuildDate:      2024-08-14T19:54:05Z
	I0815 18:05:42.688086   50711 command_runner.go:130] > GoVersion:      go1.21.6
	I0815 18:05:42.688092   50711 command_runner.go:130] > Compiler:       gc
	I0815 18:05:42.688098   50711 command_runner.go:130] > Platform:       linux/amd64
	I0815 18:05:42.688104   50711 command_runner.go:130] > Linkmode:       dynamic
	I0815 18:05:42.688110   50711 command_runner.go:130] > BuildTags:      
	I0815 18:05:42.688116   50711 command_runner.go:130] >   containers_image_ostree_stub
	I0815 18:05:42.688123   50711 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0815 18:05:42.688129   50711 command_runner.go:130] >   btrfs_noversion
	I0815 18:05:42.688135   50711 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0815 18:05:42.688141   50711 command_runner.go:130] >   libdm_no_deferred_remove
	I0815 18:05:42.688154   50711 command_runner.go:130] >   seccomp
	I0815 18:05:42.688165   50711 command_runner.go:130] > LDFlags:          unknown
	I0815 18:05:42.688172   50711 command_runner.go:130] > SeccompEnabled:   true
	I0815 18:05:42.688180   50711 command_runner.go:130] > AppArmorEnabled:  false
	I0815 18:05:42.690163   50711 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:05:42.691354   50711 main.go:141] libmachine: (multinode-769827) Calling .GetIP
	I0815 18:05:42.694137   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:42.694485   50711 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:05:42.694509   50711 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:05:42.694718   50711 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:05:42.699044   50711 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0815 18:05:42.699141   50711 kubeadm.go:883] updating cluster {Name:multinode-769827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.143 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:05:42.699264   50711 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:05:42.699303   50711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:05:42.744665   50711 command_runner.go:130] > {
	I0815 18:05:42.744690   50711 command_runner.go:130] >   "images": [
	I0815 18:05:42.744695   50711 command_runner.go:130] >     {
	I0815 18:05:42.744703   50711 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 18:05:42.744708   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.744713   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 18:05:42.744717   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744721   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.744729   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 18:05:42.744735   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 18:05:42.744747   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744752   50711 command_runner.go:130] >       "size": "87165492",
	I0815 18:05:42.744757   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.744760   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.744766   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.744775   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.744779   50711 command_runner.go:130] >     },
	I0815 18:05:42.744785   50711 command_runner.go:130] >     {
	I0815 18:05:42.744791   50711 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 18:05:42.744795   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.744800   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 18:05:42.744804   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744808   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.744815   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 18:05:42.744825   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 18:05:42.744831   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744835   50711 command_runner.go:130] >       "size": "87190579",
	I0815 18:05:42.744841   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.744850   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.744857   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.744861   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.744864   50711 command_runner.go:130] >     },
	I0815 18:05:42.744868   50711 command_runner.go:130] >     {
	I0815 18:05:42.744874   50711 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 18:05:42.744878   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.744883   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 18:05:42.744887   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744891   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.744898   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 18:05:42.744907   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 18:05:42.744911   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744917   50711 command_runner.go:130] >       "size": "1363676",
	I0815 18:05:42.744921   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.744925   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.744930   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.744934   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.744941   50711 command_runner.go:130] >     },
	I0815 18:05:42.744947   50711 command_runner.go:130] >     {
	I0815 18:05:42.744953   50711 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 18:05:42.744957   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.744962   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 18:05:42.744966   50711 command_runner.go:130] >       ],
	I0815 18:05:42.744970   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.744977   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 18:05:42.744992   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 18:05:42.744998   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745002   50711 command_runner.go:130] >       "size": "31470524",
	I0815 18:05:42.745008   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.745013   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745019   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745023   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745029   50711 command_runner.go:130] >     },
	I0815 18:05:42.745033   50711 command_runner.go:130] >     {
	I0815 18:05:42.745041   50711 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 18:05:42.745046   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745050   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 18:05:42.745056   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745059   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745072   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 18:05:42.745081   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 18:05:42.745087   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745092   50711 command_runner.go:130] >       "size": "61245718",
	I0815 18:05:42.745115   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.745119   50711 command_runner.go:130] >       "username": "nonroot",
	I0815 18:05:42.745123   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745127   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745131   50711 command_runner.go:130] >     },
	I0815 18:05:42.745136   50711 command_runner.go:130] >     {
	I0815 18:05:42.745142   50711 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 18:05:42.745148   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745153   50711 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 18:05:42.745158   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745167   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745175   50711 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 18:05:42.745184   50711 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 18:05:42.745192   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745198   50711 command_runner.go:130] >       "size": "149009664",
	I0815 18:05:42.745202   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745207   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.745211   50711 command_runner.go:130] >       },
	I0815 18:05:42.745217   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745221   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745227   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745230   50711 command_runner.go:130] >     },
	I0815 18:05:42.745236   50711 command_runner.go:130] >     {
	I0815 18:05:42.745242   50711 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 18:05:42.745248   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745252   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 18:05:42.745258   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745262   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745271   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 18:05:42.745281   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 18:05:42.745287   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745292   50711 command_runner.go:130] >       "size": "95233506",
	I0815 18:05:42.745297   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745302   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.745307   50711 command_runner.go:130] >       },
	I0815 18:05:42.745310   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745316   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745320   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745326   50711 command_runner.go:130] >     },
	I0815 18:05:42.745329   50711 command_runner.go:130] >     {
	I0815 18:05:42.745337   50711 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 18:05:42.745343   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745350   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 18:05:42.745356   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745359   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745380   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 18:05:42.745395   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 18:05:42.745401   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745410   50711 command_runner.go:130] >       "size": "89437512",
	I0815 18:05:42.745416   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745420   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.745426   50711 command_runner.go:130] >       },
	I0815 18:05:42.745429   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745433   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745436   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745440   50711 command_runner.go:130] >     },
	I0815 18:05:42.745443   50711 command_runner.go:130] >     {
	I0815 18:05:42.745448   50711 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 18:05:42.745452   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745456   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 18:05:42.745460   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745464   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745471   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 18:05:42.745477   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 18:05:42.745481   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745485   50711 command_runner.go:130] >       "size": "92728217",
	I0815 18:05:42.745488   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.745491   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745495   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745498   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745501   50711 command_runner.go:130] >     },
	I0815 18:05:42.745505   50711 command_runner.go:130] >     {
	I0815 18:05:42.745510   50711 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 18:05:42.745514   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745518   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 18:05:42.745521   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745525   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745534   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 18:05:42.745543   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 18:05:42.745549   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745553   50711 command_runner.go:130] >       "size": "68420936",
	I0815 18:05:42.745559   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745567   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.745574   50711 command_runner.go:130] >       },
	I0815 18:05:42.745578   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745584   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745588   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.745594   50711 command_runner.go:130] >     },
	I0815 18:05:42.745597   50711 command_runner.go:130] >     {
	I0815 18:05:42.745604   50711 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 18:05:42.745609   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.745614   50711 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 18:05:42.745620   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745624   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.745635   50711 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 18:05:42.745643   50711 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 18:05:42.745647   50711 command_runner.go:130] >       ],
	I0815 18:05:42.745651   50711 command_runner.go:130] >       "size": "742080",
	I0815 18:05:42.745654   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.745661   50711 command_runner.go:130] >         "value": "65535"
	I0815 18:05:42.745664   50711 command_runner.go:130] >       },
	I0815 18:05:42.745668   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.745672   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.745675   50711 command_runner.go:130] >       "pinned": true
	I0815 18:05:42.745678   50711 command_runner.go:130] >     }
	I0815 18:05:42.745681   50711 command_runner.go:130] >   ]
	I0815 18:05:42.745686   50711 command_runner.go:130] > }
	I0815 18:05:42.745957   50711 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:05:42.745972   50711 crio.go:433] Images already preloaded, skipping extraction
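
The preload check above is driven by the JSON that `sudo crictl images --output json` returns. Below is a minimal sketch of decoding that structure and verifying that expected repo tags are present; the struct names are hypothetical, the sample JSON is trimmed to two of the images shown in the log, and the real output carries additional fields (repoDigests, uid, username, spec, pinned).

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal shapes for the fields used below.
type criImage struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

type criImages struct {
	Images []criImage `json:"images"`
}

func main() {
	// Trimmed sample of the JSON logged above.
	raw := []byte(`{"images":[
		{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},
		{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"}]}`)

	var out criImages
	if err := json.Unmarshal(raw, &out); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range []string{"registry.k8s.io/pause:3.10", "registry.k8s.io/kube-apiserver:v1.31.0"} {
		fmt.Printf("%s preloaded: %v\n", want, have[want])
	}
}
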
	I0815 18:05:42.746014   50711 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:05:42.778673   50711 command_runner.go:130] > {
	I0815 18:05:42.778695   50711 command_runner.go:130] >   "images": [
	I0815 18:05:42.778699   50711 command_runner.go:130] >     {
	I0815 18:05:42.778707   50711 command_runner.go:130] >       "id": "917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557",
	I0815 18:05:42.778712   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.778718   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240730-75a5af0c"
	I0815 18:05:42.778721   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778725   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.778733   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3",
	I0815 18:05:42.778740   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"
	I0815 18:05:42.778744   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778749   50711 command_runner.go:130] >       "size": "87165492",
	I0815 18:05:42.778755   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.778759   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.778771   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.778778   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.778781   50711 command_runner.go:130] >     },
	I0815 18:05:42.778784   50711 command_runner.go:130] >     {
	I0815 18:05:42.778790   50711 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0815 18:05:42.778796   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.778802   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0815 18:05:42.778808   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778812   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.778821   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0815 18:05:42.778830   50711 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0815 18:05:42.778836   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778845   50711 command_runner.go:130] >       "size": "87190579",
	I0815 18:05:42.778851   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.778860   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.778867   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.778871   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.778877   50711 command_runner.go:130] >     },
	I0815 18:05:42.778888   50711 command_runner.go:130] >     {
	I0815 18:05:42.778896   50711 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0815 18:05:42.778901   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.778907   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0815 18:05:42.778912   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778916   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.778925   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0815 18:05:42.778934   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0815 18:05:42.778937   50711 command_runner.go:130] >       ],
	I0815 18:05:42.778942   50711 command_runner.go:130] >       "size": "1363676",
	I0815 18:05:42.778946   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.778954   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.778958   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.778965   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.778968   50711 command_runner.go:130] >     },
	I0815 18:05:42.778974   50711 command_runner.go:130] >     {
	I0815 18:05:42.778980   50711 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0815 18:05:42.778987   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.778992   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0815 18:05:42.778998   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779002   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779011   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0815 18:05:42.779027   50711 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0815 18:05:42.779033   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779038   50711 command_runner.go:130] >       "size": "31470524",
	I0815 18:05:42.779044   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.779047   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779053   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779057   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779063   50711 command_runner.go:130] >     },
	I0815 18:05:42.779067   50711 command_runner.go:130] >     {
	I0815 18:05:42.779076   50711 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0815 18:05:42.779086   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779096   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0815 18:05:42.779102   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779107   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779120   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0815 18:05:42.779129   50711 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0815 18:05:42.779135   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779140   50711 command_runner.go:130] >       "size": "61245718",
	I0815 18:05:42.779145   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.779150   50711 command_runner.go:130] >       "username": "nonroot",
	I0815 18:05:42.779156   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779160   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779165   50711 command_runner.go:130] >     },
	I0815 18:05:42.779169   50711 command_runner.go:130] >     {
	I0815 18:05:42.779177   50711 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0815 18:05:42.779183   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779188   50711 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0815 18:05:42.779193   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779198   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779207   50711 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0815 18:05:42.779215   50711 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0815 18:05:42.779220   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779224   50711 command_runner.go:130] >       "size": "149009664",
	I0815 18:05:42.779230   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779235   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.779240   50711 command_runner.go:130] >       },
	I0815 18:05:42.779244   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779250   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779254   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779260   50711 command_runner.go:130] >     },
	I0815 18:05:42.779263   50711 command_runner.go:130] >     {
	I0815 18:05:42.779271   50711 command_runner.go:130] >       "id": "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3",
	I0815 18:05:42.779277   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779282   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.0"
	I0815 18:05:42.779288   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779292   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779301   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf",
	I0815 18:05:42.779310   50711 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"
	I0815 18:05:42.779316   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779321   50711 command_runner.go:130] >       "size": "95233506",
	I0815 18:05:42.779336   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779343   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.779346   50711 command_runner.go:130] >       },
	I0815 18:05:42.779350   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779354   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779358   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779361   50711 command_runner.go:130] >     },
	I0815 18:05:42.779365   50711 command_runner.go:130] >     {
	I0815 18:05:42.779372   50711 command_runner.go:130] >       "id": "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1",
	I0815 18:05:42.779376   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779383   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.0"
	I0815 18:05:42.779387   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779393   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779414   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d",
	I0815 18:05:42.779424   50711 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"
	I0815 18:05:42.779428   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779431   50711 command_runner.go:130] >       "size": "89437512",
	I0815 18:05:42.779435   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779442   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.779445   50711 command_runner.go:130] >       },
	I0815 18:05:42.779450   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779453   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779460   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779464   50711 command_runner.go:130] >     },
	I0815 18:05:42.779469   50711 command_runner.go:130] >     {
	I0815 18:05:42.779474   50711 command_runner.go:130] >       "id": "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494",
	I0815 18:05:42.779480   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779485   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.0"
	I0815 18:05:42.779491   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779497   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779506   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf",
	I0815 18:05:42.779515   50711 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"
	I0815 18:05:42.779520   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779524   50711 command_runner.go:130] >       "size": "92728217",
	I0815 18:05:42.779531   50711 command_runner.go:130] >       "uid": null,
	I0815 18:05:42.779535   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779545   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779551   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779555   50711 command_runner.go:130] >     },
	I0815 18:05:42.779567   50711 command_runner.go:130] >     {
	I0815 18:05:42.779575   50711 command_runner.go:130] >       "id": "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94",
	I0815 18:05:42.779579   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779587   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.0"
	I0815 18:05:42.779592   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779597   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779606   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a",
	I0815 18:05:42.779615   50711 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"
	I0815 18:05:42.779620   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779627   50711 command_runner.go:130] >       "size": "68420936",
	I0815 18:05:42.779631   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779637   50711 command_runner.go:130] >         "value": "0"
	I0815 18:05:42.779641   50711 command_runner.go:130] >       },
	I0815 18:05:42.779647   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779651   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779657   50711 command_runner.go:130] >       "pinned": false
	I0815 18:05:42.779660   50711 command_runner.go:130] >     },
	I0815 18:05:42.779666   50711 command_runner.go:130] >     {
	I0815 18:05:42.779672   50711 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0815 18:05:42.779678   50711 command_runner.go:130] >       "repoTags": [
	I0815 18:05:42.779683   50711 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0815 18:05:42.779688   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779693   50711 command_runner.go:130] >       "repoDigests": [
	I0815 18:05:42.779701   50711 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0815 18:05:42.779710   50711 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0815 18:05:42.779714   50711 command_runner.go:130] >       ],
	I0815 18:05:42.779720   50711 command_runner.go:130] >       "size": "742080",
	I0815 18:05:42.779724   50711 command_runner.go:130] >       "uid": {
	I0815 18:05:42.779728   50711 command_runner.go:130] >         "value": "65535"
	I0815 18:05:42.779732   50711 command_runner.go:130] >       },
	I0815 18:05:42.779735   50711 command_runner.go:130] >       "username": "",
	I0815 18:05:42.779741   50711 command_runner.go:130] >       "spec": null,
	I0815 18:05:42.779745   50711 command_runner.go:130] >       "pinned": true
	I0815 18:05:42.779752   50711 command_runner.go:130] >     }
	I0815 18:05:42.779758   50711 command_runner.go:130] >   ]
	I0815 18:05:42.779760   50711 command_runner.go:130] > }
	I0815 18:05:42.780137   50711 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:05:42.780157   50711 cache_images.go:84] Images are preloaded, skipping loading
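	Note: the image list above is the raw output of `sudo crictl images --output json`, and the "Images are preloaded, skipping loading" decision is based on it. The sketch below is illustrative only, not minikube's actual implementation: the crictlImage struct and imagesPreloaded helper are hypothetical, and only the JSON field names (id, repoTags, repoDigests, size, pinned) are taken from the output logged above.

	// Hypothetical sketch of an "all images are preloaded" check, based only on
	// the JSON fields visible in the crictl output above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type crictlImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type crictlImageList struct {
		Images []crictlImage `json:"images"`
	}

	// imagesPreloaded reports whether every wanted tag is already present in the
	// CRI-O image store, mirroring the skip-extraction decision logged above.
	func imagesPreloaded(want []string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var list crictlImageList
		if err := json.Unmarshal(out, &list); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, tag := range want {
			if !have[tag] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := imagesPreloaded([]string{"registry.k8s.io/kube-apiserver:v1.31.0"})
		fmt.Println(ok, err)
	}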
	I0815 18:05:42.780165   50711 kubeadm.go:934] updating node { 192.168.39.73 8443 v1.31.0 crio true true} ...
	I0815 18:05:42.780261   50711 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-769827 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
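	Note: the kubelet ExecStart line above is rendered by minikube from a template (see the kubeadm.go:946 entry). The sketch below is purely illustrative: the template text, field names, and main wrapper are assumptions, with the substituted values (v1.31.0, multinode-769827, 192.168.39.73) taken from the node settings logged above.

	// Illustrative only: renders a kubelet drop-in similar to the one logged above.
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(kubeletUnit))
		// Values taken from the node settings logged above.
		_ = t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.31.0",
			"NodeName":          "multinode-769827",
			"NodeIP":            "192.168.39.73",
		})
	}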
	I0815 18:05:42.780346   50711 ssh_runner.go:195] Run: crio config
	I0815 18:05:42.821002   50711 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0815 18:05:42.821039   50711 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0815 18:05:42.821050   50711 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0815 18:05:42.821056   50711 command_runner.go:130] > #
	I0815 18:05:42.821090   50711 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0815 18:05:42.821103   50711 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0815 18:05:42.821115   50711 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0815 18:05:42.821125   50711 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0815 18:05:42.821129   50711 command_runner.go:130] > # reload'.
	I0815 18:05:42.821135   50711 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0815 18:05:42.821141   50711 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0815 18:05:42.821148   50711 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0815 18:05:42.821154   50711 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0815 18:05:42.821159   50711 command_runner.go:130] > [crio]
	I0815 18:05:42.821171   50711 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0815 18:05:42.821179   50711 command_runner.go:130] > # containers images, in this directory.
	I0815 18:05:42.821188   50711 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0815 18:05:42.821201   50711 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0815 18:05:42.821225   50711 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0815 18:05:42.821240   50711 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0815 18:05:42.821443   50711 command_runner.go:130] > # imagestore = ""
	I0815 18:05:42.821460   50711 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0815 18:05:42.821470   50711 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0815 18:05:42.821788   50711 command_runner.go:130] > storage_driver = "overlay"
	I0815 18:05:42.821804   50711 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0815 18:05:42.821810   50711 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0815 18:05:42.821814   50711 command_runner.go:130] > storage_option = [
	I0815 18:05:42.822724   50711 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0815 18:05:42.822737   50711 command_runner.go:130] > ]
	I0815 18:05:42.822747   50711 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0815 18:05:42.822757   50711 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0815 18:05:42.822764   50711 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0815 18:05:42.822773   50711 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0815 18:05:42.822786   50711 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0815 18:05:42.822797   50711 command_runner.go:130] > # always happen on a node reboot
	I0815 18:05:42.822804   50711 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0815 18:05:42.822826   50711 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0815 18:05:42.822840   50711 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0815 18:05:42.822848   50711 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0815 18:05:42.822855   50711 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0815 18:05:42.822871   50711 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0815 18:05:42.822885   50711 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0815 18:05:42.822895   50711 command_runner.go:130] > # internal_wipe = true
	I0815 18:05:42.822907   50711 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0815 18:05:42.822919   50711 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0815 18:05:42.822929   50711 command_runner.go:130] > # internal_repair = false
	I0815 18:05:42.822936   50711 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0815 18:05:42.822943   50711 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0815 18:05:42.822949   50711 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0815 18:05:42.822958   50711 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0815 18:05:42.822967   50711 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0815 18:05:42.822985   50711 command_runner.go:130] > [crio.api]
	I0815 18:05:42.822997   50711 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0815 18:05:42.823008   50711 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0815 18:05:42.823022   50711 command_runner.go:130] > # IP address on which the stream server will listen.
	I0815 18:05:42.823032   50711 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0815 18:05:42.823055   50711 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0815 18:05:42.823069   50711 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0815 18:05:42.823075   50711 command_runner.go:130] > # stream_port = "0"
	I0815 18:05:42.823086   50711 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0815 18:05:42.823096   50711 command_runner.go:130] > # stream_enable_tls = false
	I0815 18:05:42.823103   50711 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0815 18:05:42.823109   50711 command_runner.go:130] > # stream_idle_timeout = ""
	I0815 18:05:42.823115   50711 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0815 18:05:42.823124   50711 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0815 18:05:42.823128   50711 command_runner.go:130] > # minutes.
	I0815 18:05:42.823134   50711 command_runner.go:130] > # stream_tls_cert = ""
	I0815 18:05:42.823140   50711 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0815 18:05:42.823151   50711 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0815 18:05:42.823159   50711 command_runner.go:130] > # stream_tls_key = ""
	I0815 18:05:42.823174   50711 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0815 18:05:42.823187   50711 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0815 18:05:42.823211   50711 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0815 18:05:42.823220   50711 command_runner.go:130] > # stream_tls_ca = ""
	I0815 18:05:42.823231   50711 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 18:05:42.823241   50711 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0815 18:05:42.823252   50711 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0815 18:05:42.823261   50711 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0815 18:05:42.823269   50711 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0815 18:05:42.823280   50711 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0815 18:05:42.823287   50711 command_runner.go:130] > [crio.runtime]
	I0815 18:05:42.823296   50711 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0815 18:05:42.823307   50711 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0815 18:05:42.823316   50711 command_runner.go:130] > # "nofile=1024:2048"
	I0815 18:05:42.823324   50711 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0815 18:05:42.823333   50711 command_runner.go:130] > # default_ulimits = [
	I0815 18:05:42.823338   50711 command_runner.go:130] > # ]
	I0815 18:05:42.823362   50711 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0815 18:05:42.823372   50711 command_runner.go:130] > # no_pivot = false
	I0815 18:05:42.823380   50711 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0815 18:05:42.823391   50711 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0815 18:05:42.823400   50711 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0815 18:05:42.823408   50711 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0815 18:05:42.823418   50711 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0815 18:05:42.823428   50711 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 18:05:42.823438   50711 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0815 18:05:42.823445   50711 command_runner.go:130] > # Cgroup setting for conmon
	I0815 18:05:42.823458   50711 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0815 18:05:42.823468   50711 command_runner.go:130] > conmon_cgroup = "pod"
	I0815 18:05:42.823477   50711 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0815 18:05:42.823487   50711 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0815 18:05:42.823497   50711 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0815 18:05:42.823506   50711 command_runner.go:130] > conmon_env = [
	I0815 18:05:42.823514   50711 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 18:05:42.823523   50711 command_runner.go:130] > ]
	I0815 18:05:42.823533   50711 command_runner.go:130] > # Additional environment variables to set for all the
	I0815 18:05:42.823544   50711 command_runner.go:130] > # containers. These are overridden if set in the
	I0815 18:05:42.823556   50711 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0815 18:05:42.823565   50711 command_runner.go:130] > # default_env = [
	I0815 18:05:42.823571   50711 command_runner.go:130] > # ]
	I0815 18:05:42.823583   50711 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0815 18:05:42.823596   50711 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0815 18:05:42.823605   50711 command_runner.go:130] > # selinux = false
	I0815 18:05:42.823615   50711 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0815 18:05:42.823628   50711 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0815 18:05:42.823640   50711 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0815 18:05:42.823650   50711 command_runner.go:130] > # seccomp_profile = ""
	I0815 18:05:42.823659   50711 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0815 18:05:42.823670   50711 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0815 18:05:42.823681   50711 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0815 18:05:42.823691   50711 command_runner.go:130] > # which might increase security.
	I0815 18:05:42.823702   50711 command_runner.go:130] > # This option is currently deprecated,
	I0815 18:05:42.823711   50711 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0815 18:05:42.823734   50711 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0815 18:05:42.823750   50711 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0815 18:05:42.823762   50711 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0815 18:05:42.823775   50711 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0815 18:05:42.823788   50711 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0815 18:05:42.823798   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.823805   50711 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0815 18:05:42.823816   50711 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0815 18:05:42.823823   50711 command_runner.go:130] > # the cgroup blockio controller.
	I0815 18:05:42.823832   50711 command_runner.go:130] > # blockio_config_file = ""
	I0815 18:05:42.823843   50711 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0815 18:05:42.823852   50711 command_runner.go:130] > # blockio parameters.
	I0815 18:05:42.823858   50711 command_runner.go:130] > # blockio_reload = false
	I0815 18:05:42.823872   50711 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0815 18:05:42.823880   50711 command_runner.go:130] > # irqbalance daemon.
	I0815 18:05:42.823888   50711 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0815 18:05:42.823901   50711 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0815 18:05:42.823914   50711 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0815 18:05:42.823928   50711 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0815 18:05:42.823940   50711 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0815 18:05:42.823952   50711 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0815 18:05:42.823963   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.823969   50711 command_runner.go:130] > # rdt_config_file = ""
	I0815 18:05:42.823981   50711 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0815 18:05:42.823987   50711 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0815 18:05:42.824030   50711 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0815 18:05:42.824042   50711 command_runner.go:130] > # separate_pull_cgroup = ""
	I0815 18:05:42.824052   50711 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0815 18:05:42.824065   50711 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0815 18:05:42.824073   50711 command_runner.go:130] > # will be added.
	I0815 18:05:42.824079   50711 command_runner.go:130] > # default_capabilities = [
	I0815 18:05:42.824088   50711 command_runner.go:130] > # 	"CHOWN",
	I0815 18:05:42.824099   50711 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0815 18:05:42.824108   50711 command_runner.go:130] > # 	"FSETID",
	I0815 18:05:42.824113   50711 command_runner.go:130] > # 	"FOWNER",
	I0815 18:05:42.824122   50711 command_runner.go:130] > # 	"SETGID",
	I0815 18:05:42.824136   50711 command_runner.go:130] > # 	"SETUID",
	I0815 18:05:42.824144   50711 command_runner.go:130] > # 	"SETPCAP",
	I0815 18:05:42.824152   50711 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0815 18:05:42.824156   50711 command_runner.go:130] > # 	"KILL",
	I0815 18:05:42.824159   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824169   50711 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0815 18:05:42.824177   50711 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0815 18:05:42.824182   50711 command_runner.go:130] > # add_inheritable_capabilities = false
	I0815 18:05:42.824189   50711 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0815 18:05:42.824196   50711 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 18:05:42.824201   50711 command_runner.go:130] > default_sysctls = [
	I0815 18:05:42.824205   50711 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0815 18:05:42.824210   50711 command_runner.go:130] > ]
	I0815 18:05:42.824215   50711 command_runner.go:130] > # List of devices on the host that a
	I0815 18:05:42.824223   50711 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0815 18:05:42.824228   50711 command_runner.go:130] > # allowed_devices = [
	I0815 18:05:42.824231   50711 command_runner.go:130] > # 	"/dev/fuse",
	I0815 18:05:42.824237   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824241   50711 command_runner.go:130] > # List of additional devices. specified as
	I0815 18:05:42.824250   50711 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0815 18:05:42.824257   50711 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0815 18:05:42.824263   50711 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0815 18:05:42.824268   50711 command_runner.go:130] > # additional_devices = [
	I0815 18:05:42.824272   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824280   50711 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0815 18:05:42.824285   50711 command_runner.go:130] > # cdi_spec_dirs = [
	I0815 18:05:42.824291   50711 command_runner.go:130] > # 	"/etc/cdi",
	I0815 18:05:42.824296   50711 command_runner.go:130] > # 	"/var/run/cdi",
	I0815 18:05:42.824301   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824308   50711 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0815 18:05:42.824315   50711 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0815 18:05:42.824322   50711 command_runner.go:130] > # Defaults to false.
	I0815 18:05:42.824326   50711 command_runner.go:130] > # device_ownership_from_security_context = false
	I0815 18:05:42.824332   50711 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0815 18:05:42.824339   50711 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0815 18:05:42.824343   50711 command_runner.go:130] > # hooks_dir = [
	I0815 18:05:42.824365   50711 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0815 18:05:42.824369   50711 command_runner.go:130] > # ]
	I0815 18:05:42.824374   50711 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0815 18:05:42.824382   50711 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0815 18:05:42.824389   50711 command_runner.go:130] > # its default mounts from the following two files:
	I0815 18:05:42.824393   50711 command_runner.go:130] > #
	I0815 18:05:42.824399   50711 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0815 18:05:42.824407   50711 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0815 18:05:42.824412   50711 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0815 18:05:42.824418   50711 command_runner.go:130] > #
	I0815 18:05:42.824423   50711 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0815 18:05:42.824432   50711 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0815 18:05:42.824438   50711 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0815 18:05:42.824445   50711 command_runner.go:130] > #      only add mounts it finds in this file.
	I0815 18:05:42.824448   50711 command_runner.go:130] > #
	I0815 18:05:42.824452   50711 command_runner.go:130] > # default_mounts_file = ""
	I0815 18:05:42.824459   50711 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0815 18:05:42.824468   50711 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0815 18:05:42.824472   50711 command_runner.go:130] > pids_limit = 1024
	I0815 18:05:42.824478   50711 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0815 18:05:42.824501   50711 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0815 18:05:42.824515   50711 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0815 18:05:42.824528   50711 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0815 18:05:42.824535   50711 command_runner.go:130] > # log_size_max = -1
	I0815 18:05:42.824541   50711 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0815 18:05:42.824547   50711 command_runner.go:130] > # log_to_journald = false
	I0815 18:05:42.824553   50711 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0815 18:05:42.824560   50711 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0815 18:05:42.824565   50711 command_runner.go:130] > # Path to directory for container attach sockets.
	I0815 18:05:42.824572   50711 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0815 18:05:42.824577   50711 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0815 18:05:42.824583   50711 command_runner.go:130] > # bind_mount_prefix = ""
	I0815 18:05:42.824588   50711 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0815 18:05:42.824594   50711 command_runner.go:130] > # read_only = false
	I0815 18:05:42.824600   50711 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0815 18:05:42.824608   50711 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0815 18:05:42.824619   50711 command_runner.go:130] > # live configuration reload.
	I0815 18:05:42.824626   50711 command_runner.go:130] > # log_level = "info"
	I0815 18:05:42.824631   50711 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0815 18:05:42.824645   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.824651   50711 command_runner.go:130] > # log_filter = ""
	I0815 18:05:42.824657   50711 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0815 18:05:42.824667   50711 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0815 18:05:42.824673   50711 command_runner.go:130] > # separated by comma.
	I0815 18:05:42.824680   50711 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 18:05:42.824687   50711 command_runner.go:130] > # uid_mappings = ""
	I0815 18:05:42.824692   50711 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0815 18:05:42.824700   50711 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0815 18:05:42.824705   50711 command_runner.go:130] > # separated by comma.
	I0815 18:05:42.824712   50711 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 18:05:42.824718   50711 command_runner.go:130] > # gid_mappings = ""
	I0815 18:05:42.824724   50711 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0815 18:05:42.824735   50711 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 18:05:42.824743   50711 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 18:05:42.824750   50711 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 18:05:42.824756   50711 command_runner.go:130] > # minimum_mappable_uid = -1
	I0815 18:05:42.824762   50711 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0815 18:05:42.824770   50711 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0815 18:05:42.824778   50711 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0815 18:05:42.824786   50711 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0815 18:05:42.824792   50711 command_runner.go:130] > # minimum_mappable_gid = -1
	I0815 18:05:42.824798   50711 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0815 18:05:42.824806   50711 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0815 18:05:42.824817   50711 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0815 18:05:42.824823   50711 command_runner.go:130] > # ctr_stop_timeout = 30
	I0815 18:05:42.824828   50711 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0815 18:05:42.824835   50711 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0815 18:05:42.824840   50711 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0815 18:05:42.824847   50711 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0815 18:05:42.824851   50711 command_runner.go:130] > drop_infra_ctr = false
	I0815 18:05:42.824857   50711 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0815 18:05:42.824864   50711 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0815 18:05:42.824876   50711 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0815 18:05:42.824882   50711 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0815 18:05:42.824888   50711 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0815 18:05:42.824896   50711 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0815 18:05:42.824902   50711 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0815 18:05:42.824909   50711 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0815 18:05:42.824913   50711 command_runner.go:130] > # shared_cpuset = ""
	I0815 18:05:42.824919   50711 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0815 18:05:42.824925   50711 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0815 18:05:42.824929   50711 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0815 18:05:42.824938   50711 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0815 18:05:42.824944   50711 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0815 18:05:42.824949   50711 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0815 18:05:42.824957   50711 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0815 18:05:42.824964   50711 command_runner.go:130] > # enable_criu_support = false
	I0815 18:05:42.824969   50711 command_runner.go:130] > # Enable/disable the generation of the container,
	I0815 18:05:42.824976   50711 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0815 18:05:42.824981   50711 command_runner.go:130] > # enable_pod_events = false
	I0815 18:05:42.824987   50711 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0815 18:05:42.824995   50711 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0815 18:05:42.825000   50711 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0815 18:05:42.825006   50711 command_runner.go:130] > # default_runtime = "runc"
	I0815 18:05:42.825011   50711 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0815 18:05:42.825020   50711 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0815 18:05:42.825034   50711 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0815 18:05:42.825041   50711 command_runner.go:130] > # creation as a file is not desired either.
	I0815 18:05:42.825048   50711 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0815 18:05:42.825056   50711 command_runner.go:130] > # the hostname is being managed dynamically.
	I0815 18:05:42.825063   50711 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0815 18:05:42.825066   50711 command_runner.go:130] > # ]
	I0815 18:05:42.825074   50711 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0815 18:05:42.825081   50711 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0815 18:05:42.825089   50711 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0815 18:05:42.825094   50711 command_runner.go:130] > # Each entry in the table should follow the format:
	I0815 18:05:42.825100   50711 command_runner.go:130] > #
	I0815 18:05:42.825105   50711 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0815 18:05:42.825115   50711 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0815 18:05:42.825170   50711 command_runner.go:130] > # runtime_type = "oci"
	I0815 18:05:42.825183   50711 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0815 18:05:42.825190   50711 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0815 18:05:42.825195   50711 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0815 18:05:42.825205   50711 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0815 18:05:42.825212   50711 command_runner.go:130] > # monitor_env = []
	I0815 18:05:42.825216   50711 command_runner.go:130] > # privileged_without_host_devices = false
	I0815 18:05:42.825223   50711 command_runner.go:130] > # allowed_annotations = []
	I0815 18:05:42.825228   50711 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0815 18:05:42.825234   50711 command_runner.go:130] > # Where:
	I0815 18:05:42.825239   50711 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0815 18:05:42.825247   50711 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0815 18:05:42.825255   50711 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0815 18:05:42.825263   50711 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0815 18:05:42.825269   50711 command_runner.go:130] > #   in $PATH.
	I0815 18:05:42.825275   50711 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0815 18:05:42.825280   50711 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0815 18:05:42.825286   50711 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0815 18:05:42.825292   50711 command_runner.go:130] > #   state.
	I0815 18:05:42.825298   50711 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0815 18:05:42.825306   50711 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0815 18:05:42.825313   50711 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0815 18:05:42.825320   50711 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0815 18:05:42.825326   50711 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0815 18:05:42.825335   50711 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0815 18:05:42.825341   50711 command_runner.go:130] > #   The currently recognized values are:
	I0815 18:05:42.825354   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0815 18:05:42.825363   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0815 18:05:42.825369   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0815 18:05:42.825376   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0815 18:05:42.825385   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0815 18:05:42.825394   50711 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0815 18:05:42.825402   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0815 18:05:42.825409   50711 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0815 18:05:42.825416   50711 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0815 18:05:42.825432   50711 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0815 18:05:42.825438   50711 command_runner.go:130] > #   deprecated option "conmon".
	I0815 18:05:42.825445   50711 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0815 18:05:42.825452   50711 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0815 18:05:42.825458   50711 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0815 18:05:42.825465   50711 command_runner.go:130] > #   should be moved to the container's cgroup
	I0815 18:05:42.825471   50711 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I0815 18:05:42.825478   50711 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0815 18:05:42.825485   50711 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0815 18:05:42.825492   50711 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0815 18:05:42.825498   50711 command_runner.go:130] > #
	I0815 18:05:42.825502   50711 command_runner.go:130] > # Using the seccomp notifier feature:
	I0815 18:05:42.825508   50711 command_runner.go:130] > #
	I0815 18:05:42.825514   50711 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0815 18:05:42.825522   50711 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0815 18:05:42.825528   50711 command_runner.go:130] > #
	I0815 18:05:42.825533   50711 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0815 18:05:42.825541   50711 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0815 18:05:42.825544   50711 command_runner.go:130] > #
	I0815 18:05:42.825550   50711 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0815 18:05:42.825555   50711 command_runner.go:130] > # feature.
	I0815 18:05:42.825558   50711 command_runner.go:130] > #
	I0815 18:05:42.825567   50711 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0815 18:05:42.825621   50711 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0815 18:05:42.825642   50711 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0815 18:05:42.825655   50711 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0815 18:05:42.825667   50711 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0815 18:05:42.825675   50711 command_runner.go:130] > #
	I0815 18:05:42.825688   50711 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0815 18:05:42.825699   50711 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0815 18:05:42.825707   50711 command_runner.go:130] > #
	I0815 18:05:42.825718   50711 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0815 18:05:42.825730   50711 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0815 18:05:42.825737   50711 command_runner.go:130] > #
	I0815 18:05:42.825745   50711 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0815 18:05:42.825756   50711 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0815 18:05:42.825779   50711 command_runner.go:130] > # limitation.
	I0815 18:05:42.825792   50711 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0815 18:05:42.825802   50711 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0815 18:05:42.825811   50711 command_runner.go:130] > runtime_type = "oci"
	I0815 18:05:42.825818   50711 command_runner.go:130] > runtime_root = "/run/runc"
	I0815 18:05:42.825827   50711 command_runner.go:130] > runtime_config_path = ""
	I0815 18:05:42.825835   50711 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0815 18:05:42.825843   50711 command_runner.go:130] > monitor_cgroup = "pod"
	I0815 18:05:42.825849   50711 command_runner.go:130] > monitor_exec_cgroup = ""
	I0815 18:05:42.825857   50711 command_runner.go:130] > monitor_env = [
	I0815 18:05:42.825866   50711 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0815 18:05:42.825871   50711 command_runner.go:130] > ]
	I0815 18:05:42.825876   50711 command_runner.go:130] > privileged_without_host_devices = false
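	Note: the [crio.runtime.runtimes.runc] table printed above is ordinary TOML. As a hedged sketch, the snippet below decodes that handler table with github.com/BurntSushi/toml; the struct layout is an assumption for illustration only, and CRI-O itself uses its own configuration loader.

	// Hedged sketch: decode the runc runtime-handler table shown above.
	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	type runtimeHandler struct {
		RuntimePath   string   `toml:"runtime_path"`
		RuntimeType   string   `toml:"runtime_type"`
		RuntimeRoot   string   `toml:"runtime_root"`
		MonitorPath   string   `toml:"monitor_path"`
		MonitorCgroup string   `toml:"monitor_cgroup"`
		MonitorEnv    []string `toml:"monitor_env"`
	}

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				Runtimes map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	func main() {
		var cfg crioConfig
		// Values copied from the `crio config` output logged above.
		_, err := toml.Decode(`
	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	`, &cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("runc handler: %+v\n", cfg.Crio.Runtime.Runtimes["runc"])
	}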
	I0815 18:05:42.825885   50711 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0815 18:05:42.825892   50711 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0815 18:05:42.825899   50711 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0815 18:05:42.825909   50711 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0815 18:05:42.825917   50711 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0815 18:05:42.825925   50711 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0815 18:05:42.825941   50711 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0815 18:05:42.825956   50711 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0815 18:05:42.825968   50711 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0815 18:05:42.825981   50711 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0815 18:05:42.825987   50711 command_runner.go:130] > # Example:
	I0815 18:05:42.825995   50711 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0815 18:05:42.826004   50711 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0815 18:05:42.826011   50711 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0815 18:05:42.826017   50711 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0815 18:05:42.826020   50711 command_runner.go:130] > # cpuset = 0
	I0815 18:05:42.826024   50711 command_runner.go:130] > # cpushares = "0-1"
	I0815 18:05:42.826027   50711 command_runner.go:130] > # Where:
	I0815 18:05:42.826031   50711 command_runner.go:130] > # The workload name is workload-type.
	I0815 18:05:42.826038   50711 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0815 18:05:42.826043   50711 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0815 18:05:42.826049   50711 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0815 18:05:42.826056   50711 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0815 18:05:42.826069   50711 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
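A sketch of the pod-side annotations that would opt into the example "workload-type" workload described above; the pod name, container name, and cpuset value are illustrative only:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo                          # hypothetical name
	  annotations:
	    io.crio/workload: ""                       # activation annotation; the value is ignored
	    io.crio.workload-type.cpuset/demo: "0-1"   # $annotation_prefix.$resource/$ctrName override
	spec:
	  containers:
	    - name: demo
	      image: registry.k8s.io/pause:3.10        # placeholder image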
	I0815 18:05:42.826074   50711 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0815 18:05:42.826084   50711 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0815 18:05:42.826088   50711 command_runner.go:130] > # Default value is set to true
	I0815 18:05:42.826092   50711 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0815 18:05:42.826098   50711 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0815 18:05:42.826102   50711 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0815 18:05:42.826107   50711 command_runner.go:130] > # Default value is set to 'false'
	I0815 18:05:42.826111   50711 command_runner.go:130] > # disable_hostport_mapping = false
	I0815 18:05:42.826116   50711 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0815 18:05:42.826120   50711 command_runner.go:130] > #
	I0815 18:05:42.826129   50711 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0815 18:05:42.826137   50711 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0815 18:05:42.826143   50711 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0815 18:05:42.826149   50711 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0815 18:05:42.826155   50711 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0815 18:05:42.826159   50711 command_runner.go:130] > [crio.image]
	I0815 18:05:42.826169   50711 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0815 18:05:42.826176   50711 command_runner.go:130] > # default_transport = "docker://"
	I0815 18:05:42.826182   50711 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0815 18:05:42.826190   50711 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0815 18:05:42.826194   50711 command_runner.go:130] > # global_auth_file = ""
	I0815 18:05:42.826199   50711 command_runner.go:130] > # The image used to instantiate infra containers.
	I0815 18:05:42.826207   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.826212   50711 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0815 18:05:42.826221   50711 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0815 18:05:42.826227   50711 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0815 18:05:42.826234   50711 command_runner.go:130] > # This option supports live configuration reload.
	I0815 18:05:42.826238   50711 command_runner.go:130] > # pause_image_auth_file = ""
	I0815 18:05:42.826246   50711 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0815 18:05:42.826253   50711 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0815 18:05:42.826261   50711 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0815 18:05:42.826266   50711 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0815 18:05:42.826273   50711 command_runner.go:130] > # pause_command = "/pause"
	I0815 18:05:42.826279   50711 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0815 18:05:42.826287   50711 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0815 18:05:42.826297   50711 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0815 18:05:42.826308   50711 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0815 18:05:42.826313   50711 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0815 18:05:42.826320   50711 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0815 18:05:42.826324   50711 command_runner.go:130] > # pinned_images = [
	I0815 18:05:42.826328   50711 command_runner.go:130] > # ]
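A hypothetical pinned_images stanza illustrating the three match styles described above (exact, glob with a trailing wildcard, keyword with wildcards on both ends); the image names are placeholders:

	pinned_images = [
		"registry.k8s.io/pause:3.10",   # exact match (must match the entire name)
		"registry.k8s.io/kube-*",       # glob match (wildcard only at the end)
		"*coredns*",                    # keyword match (wildcards on both ends)
	]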
	I0815 18:05:42.826334   50711 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0815 18:05:42.826342   50711 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0815 18:05:42.826348   50711 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0815 18:05:42.826356   50711 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0815 18:05:42.826361   50711 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0815 18:05:42.826366   50711 command_runner.go:130] > # signature_policy = ""
	I0815 18:05:42.826371   50711 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0815 18:05:42.826383   50711 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0815 18:05:42.826391   50711 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0815 18:05:42.826396   50711 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0815 18:05:42.826404   50711 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0815 18:05:42.826416   50711 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0815 18:05:42.826424   50711 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0815 18:05:42.826430   50711 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0815 18:05:42.826436   50711 command_runner.go:130] > # changing them here.
	I0815 18:05:42.826440   50711 command_runner.go:130] > # insecure_registries = [
	I0815 18:05:42.826443   50711 command_runner.go:130] > # ]
	I0815 18:05:42.826450   50711 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0815 18:05:42.826458   50711 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0815 18:05:42.826464   50711 command_runner.go:130] > # image_volumes = "mkdir"
	I0815 18:05:42.826475   50711 command_runner.go:130] > # Temporary directory to use for storing big files
	I0815 18:05:42.826482   50711 command_runner.go:130] > # big_files_temporary_dir = ""
	I0815 18:05:42.826491   50711 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0815 18:05:42.826496   50711 command_runner.go:130] > # CNI plugins.
	I0815 18:05:42.826502   50711 command_runner.go:130] > [crio.network]
	I0815 18:05:42.826508   50711 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0815 18:05:42.826514   50711 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0815 18:05:42.826518   50711 command_runner.go:130] > # cni_default_network = ""
	I0815 18:05:42.826523   50711 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0815 18:05:42.826530   50711 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0815 18:05:42.826540   50711 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0815 18:05:42.826546   50711 command_runner.go:130] > # plugin_dirs = [
	I0815 18:05:42.826550   50711 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0815 18:05:42.826560   50711 command_runner.go:130] > # ]
	I0815 18:05:42.826568   50711 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0815 18:05:42.826572   50711 command_runner.go:130] > [crio.metrics]
	I0815 18:05:42.826578   50711 command_runner.go:130] > # Globally enable or disable metrics support.
	I0815 18:05:42.826582   50711 command_runner.go:130] > enable_metrics = true
	I0815 18:05:42.826588   50711 command_runner.go:130] > # Specify enabled metrics collectors.
	I0815 18:05:42.826593   50711 command_runner.go:130] > # Per default all metrics are enabled.
	I0815 18:05:42.826599   50711 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0815 18:05:42.826607   50711 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0815 18:05:42.826615   50711 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0815 18:05:42.826619   50711 command_runner.go:130] > # metrics_collectors = [
	I0815 18:05:42.826624   50711 command_runner.go:130] > # 	"operations",
	I0815 18:05:42.826628   50711 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0815 18:05:42.826635   50711 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0815 18:05:42.826639   50711 command_runner.go:130] > # 	"operations_errors",
	I0815 18:05:42.826645   50711 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0815 18:05:42.826649   50711 command_runner.go:130] > # 	"image_pulls_by_name",
	I0815 18:05:42.826656   50711 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0815 18:05:42.826660   50711 command_runner.go:130] > # 	"image_pulls_failures",
	I0815 18:05:42.826664   50711 command_runner.go:130] > # 	"image_pulls_successes",
	I0815 18:05:42.826668   50711 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0815 18:05:42.826672   50711 command_runner.go:130] > # 	"image_layer_reuse",
	I0815 18:05:42.826676   50711 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0815 18:05:42.826680   50711 command_runner.go:130] > # 	"containers_oom_total",
	I0815 18:05:42.826684   50711 command_runner.go:130] > # 	"containers_oom",
	I0815 18:05:42.826688   50711 command_runner.go:130] > # 	"processes_defunct",
	I0815 18:05:42.826692   50711 command_runner.go:130] > # 	"operations_total",
	I0815 18:05:42.826696   50711 command_runner.go:130] > # 	"operations_latency_seconds",
	I0815 18:05:42.826700   50711 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0815 18:05:42.826706   50711 command_runner.go:130] > # 	"operations_errors_total",
	I0815 18:05:42.826710   50711 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0815 18:05:42.826721   50711 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0815 18:05:42.826727   50711 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0815 18:05:42.826740   50711 command_runner.go:130] > # 	"image_pulls_success_total",
	I0815 18:05:42.826747   50711 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0815 18:05:42.826751   50711 command_runner.go:130] > # 	"containers_oom_count_total",
	I0815 18:05:42.826758   50711 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0815 18:05:42.826762   50711 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0815 18:05:42.826766   50711 command_runner.go:130] > # ]
	I0815 18:05:42.826771   50711 command_runner.go:130] > # The port on which the metrics server will listen.
	I0815 18:05:42.826777   50711 command_runner.go:130] > # metrics_port = 9090
	I0815 18:05:42.826783   50711 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0815 18:05:42.826787   50711 command_runner.go:130] > # metrics_socket = ""
	I0815 18:05:42.826792   50711 command_runner.go:130] > # The certificate for the secure metrics server.
	I0815 18:05:42.826800   50711 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0815 18:05:42.826806   50711 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0815 18:05:42.826813   50711 command_runner.go:130] > # certificate on any modification event.
	I0815 18:05:42.826816   50711 command_runner.go:130] > # metrics_cert = ""
	I0815 18:05:42.826821   50711 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0815 18:05:42.826827   50711 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0815 18:05:42.826831   50711 command_runner.go:130] > # metrics_key = ""
	I0815 18:05:42.826837   50711 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0815 18:05:42.826842   50711 command_runner.go:130] > [crio.tracing]
	I0815 18:05:42.826847   50711 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0815 18:05:42.826852   50711 command_runner.go:130] > # enable_tracing = false
	I0815 18:05:42.826857   50711 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0815 18:05:42.826862   50711 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0815 18:05:42.826869   50711 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0815 18:05:42.826875   50711 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0815 18:05:42.826879   50711 command_runner.go:130] > # CRI-O NRI configuration.
	I0815 18:05:42.826884   50711 command_runner.go:130] > [crio.nri]
	I0815 18:05:42.826888   50711 command_runner.go:130] > # Globally enable or disable NRI.
	I0815 18:05:42.826892   50711 command_runner.go:130] > # enable_nri = false
	I0815 18:05:42.826896   50711 command_runner.go:130] > # NRI socket to listen on.
	I0815 18:05:42.826901   50711 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0815 18:05:42.826905   50711 command_runner.go:130] > # NRI plugin directory to use.
	I0815 18:05:42.826912   50711 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0815 18:05:42.826916   50711 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0815 18:05:42.826923   50711 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0815 18:05:42.826933   50711 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0815 18:05:42.826939   50711 command_runner.go:130] > # nri_disable_connections = false
	I0815 18:05:42.826944   50711 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0815 18:05:42.826950   50711 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0815 18:05:42.826955   50711 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0815 18:05:42.826966   50711 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0815 18:05:42.826973   50711 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0815 18:05:42.826977   50711 command_runner.go:130] > [crio.stats]
	I0815 18:05:42.826984   50711 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0815 18:05:42.826989   50711 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0815 18:05:42.826995   50711 command_runner.go:130] > # stats_collection_period = 0
	I0815 18:05:42.827026   50711 command_runner.go:130] ! time="2024-08-15 18:05:42.792146362Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0815 18:05:42.827039   50711 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0815 18:05:42.827210   50711 cni.go:84] Creating CNI manager for ""
	I0815 18:05:42.827225   50711 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0815 18:05:42.827236   50711 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:05:42.827257   50711 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-769827 NodeName:multinode-769827 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:05:42.827390   50711 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-769827"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:05:42.827460   50711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:05:42.837909   50711 command_runner.go:130] > kubeadm
	I0815 18:05:42.837933   50711 command_runner.go:130] > kubectl
	I0815 18:05:42.837940   50711 command_runner.go:130] > kubelet
	I0815 18:05:42.838002   50711 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:05:42.838055   50711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:05:42.847779   50711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0815 18:05:42.864141   50711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:05:42.880904   50711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0815 18:05:42.897589   50711 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I0815 18:05:42.901265   50711 command_runner.go:130] > 192.168.39.73	control-plane.minikube.internal
	I0815 18:05:42.901371   50711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:05:43.037297   50711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:05:43.051889   50711 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827 for IP: 192.168.39.73
	I0815 18:05:43.051914   50711 certs.go:194] generating shared ca certs ...
	I0815 18:05:43.051929   50711 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:05:43.052087   50711 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:05:43.052131   50711 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:05:43.052142   50711 certs.go:256] generating profile certs ...
	I0815 18:05:43.052217   50711 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/client.key
	I0815 18:05:43.052273   50711 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.key.f6f8ed09
	I0815 18:05:43.052309   50711 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.key
	I0815 18:05:43.052320   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0815 18:05:43.052334   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0815 18:05:43.052359   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0815 18:05:43.052372   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0815 18:05:43.052383   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0815 18:05:43.052397   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0815 18:05:43.052409   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0815 18:05:43.052418   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0815 18:05:43.052465   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:05:43.052522   50711 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:05:43.052534   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:05:43.052556   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:05:43.052580   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:05:43.052603   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:05:43.052651   50711 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:05:43.052683   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.052696   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem -> /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.052708   50711 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.053263   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:05:43.078370   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:05:43.101934   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:05:43.125482   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:05:43.149530   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:05:43.173909   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:05:43.198716   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:05:43.222387   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/multinode-769827/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:05:43.246363   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:05:43.271418   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:05:43.295460   50711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:05:43.318095   50711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:05:43.334196   50711 ssh_runner.go:195] Run: openssl version
	I0815 18:05:43.339708   50711 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0815 18:05:43.339932   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:05:43.350623   50711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.354876   50711 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.354987   50711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.355036   50711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:05:43.360400   50711 command_runner.go:130] > b5213941
	I0815 18:05:43.360459   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:05:43.369911   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:05:43.381189   50711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.385543   50711 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.385568   50711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.385606   50711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:05:43.391173   50711 command_runner.go:130] > 51391683
	I0815 18:05:43.391237   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:05:43.400738   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:05:43.411924   50711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.416455   50711 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.416507   50711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.416556   50711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:05:43.422079   50711 command_runner.go:130] > 3ec20f2e
	I0815 18:05:43.422245   50711 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
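The hash-and-link sequence above follows OpenSSL's hashed CA directory convention: each CA file is linked as "<subject-hash>.0" under /etc/ssl/certs so that lookups by subject hash succeed. A minimal sketch of the same two steps for one of the certificates used above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0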
	I0815 18:05:43.431802   50711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:05:43.436161   50711 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:05:43.436185   50711 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0815 18:05:43.436191   50711 command_runner.go:130] > Device: 253,1	Inode: 1056278     Links: 1
	I0815 18:05:43.436197   50711 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0815 18:05:43.436204   50711 command_runner.go:130] > Access: 2024-08-15 17:58:53.814641207 +0000
	I0815 18:05:43.436209   50711 command_runner.go:130] > Modify: 2024-08-15 17:58:53.814641207 +0000
	I0815 18:05:43.436214   50711 command_runner.go:130] > Change: 2024-08-15 17:58:53.814641207 +0000
	I0815 18:05:43.436219   50711 command_runner.go:130] >  Birth: 2024-08-15 17:58:53.814641207 +0000
	I0815 18:05:43.436263   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:05:43.441545   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.441855   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:05:43.447044   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.447088   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:05:43.453109   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.453214   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:05:43.458506   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.458737   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:05:43.464216   50711 command_runner.go:130] > Certificate will not expire
	I0815 18:05:43.464264   50711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:05:43.469715   50711 command_runner.go:130] > Certificate will not expire
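The repeated "openssl x509 -checkend 86400" calls above verify that each certificate remains valid for at least the next 86400 seconds (24 hours); the command prints "Certificate will not expire" and exits 0 in that case. A minimal sketch of using the exit status directly, with one of the paths checked above:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
		echo "certificate valid for at least 24h"
	else
		echo "certificate expires within 24h"
	fi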
	I0815 18:05:43.469830   50711 kubeadm.go:392] StartCluster: {Name:multinode-769827 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
0 ClusterName:multinode-769827 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.143 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:05:43.469935   50711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:05:43.469980   50711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:05:43.505575   50711 command_runner.go:130] > 65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f
	I0815 18:05:43.505606   50711 command_runner.go:130] > 6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0
	I0815 18:05:43.505617   50711 command_runner.go:130] > 29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77
	I0815 18:05:43.505626   50711 command_runner.go:130] > fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee
	I0815 18:05:43.505632   50711 command_runner.go:130] > 99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0
	I0815 18:05:43.505637   50711 command_runner.go:130] > 006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469
	I0815 18:05:43.505643   50711 command_runner.go:130] > 75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed
	I0815 18:05:43.505725   50711 command_runner.go:130] > 77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a
	I0815 18:05:43.507200   50711 cri.go:89] found id: "65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f"
	I0815 18:05:43.507216   50711 cri.go:89] found id: "6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0"
	I0815 18:05:43.507220   50711 cri.go:89] found id: "29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77"
	I0815 18:05:43.507223   50711 cri.go:89] found id: "fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee"
	I0815 18:05:43.507225   50711 cri.go:89] found id: "99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0"
	I0815 18:05:43.507228   50711 cri.go:89] found id: "006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469"
	I0815 18:05:43.507231   50711 cri.go:89] found id: "75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed"
	I0815 18:05:43.507233   50711 cri.go:89] found id: "77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a"
	I0815 18:05:43.507236   50711 cri.go:89] found id: ""
	I0815 18:05:43.507278   50711 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.815190485Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745392815166540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bad3e325-7c70-4a07-99e5-8bd9d633be8e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.815716621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe7b2c56-973e-4019-a877-87993ec75093 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.815769930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe7b2c56-973e-4019-a877-87993ec75093 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.816662287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bcbc2400a6bc10fb610629028a1f580df8eacbef273dc1e2887b8bd355ec1dc,PodSandboxId:9654f0dfe2b2a329465e665bdd9df6552b82358853f9d1e0f4e1981499d6da86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723745183912408214,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b,PodSandboxId:e840391611d579fe922b750bbd748741c31482297e55bfc9dfd0911959b2fac2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723745150315984903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81,PodSandboxId:7a0cec1e6c28dcb19cd8b1da08bd7b126ab05e1f6b64db8c986daf7812f0dd68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723745150371025382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73e37bc2dbdb91ad306b4047d3db2d22e0197f1c9778b06d4b967201c83286a,PodSandboxId:545bd160e23a18d5731c8c633c7ae1688f55e9e12629e86d4dc0e4dd9cc185a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745150230337519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082,PodSandboxId:90f33d9a591545ba2d39caa46f713554d1b2900c512d3565ef6fe47b8fde1b63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723745150199396711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716,PodSandboxId:28a7289752b82ecc293f6fd997b71c299c83c818769b3c14a0838b8b9d22da8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723745146331754623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5,PodSandboxId:d4e1240abe3624ac34dd05e70a13e47ddae0933c71b4fd8caa742d49ed62c63c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723745146348109190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6,PodSandboxId:1a75875531d2399ccd8fcbc6d0ca93411197449248ca0cb5bbfc5b67b3454bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723745146315478663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc,PodSandboxId:9d2d5e1d08314d5d95c09908a2ee37d9d3a5b1655ad1371c8a541779e783cb32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723745146276141066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d22d50a70ac6f51b2ef8ac44cd4dbf68940601ef90765ab9de8e21a11150e97,PodSandboxId:3404e4fc3eb6d4d9a1cbfb7f2711143d681d2a4e4b10789bd040c634792ce33e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723744819620335698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0,PodSandboxId:a2d11e45774519fab256b4fbc1a928e4a3707d7a808fd4ee3b6f3ccb789a2b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723744763874119067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f,PodSandboxId:f59a1504b34b8697570eed7aecc8fdbdbb66072afde97e2472575ecafeaad732,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723744763894025525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77,PodSandboxId:09bb5321f840662e81b7010b62d9865f05b0a4d1f63eba891803debbcf8730f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723744752109328605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee,PodSandboxId:89e5bd0b5f5109af0f557ff83b42b4f63edb836582e4cbf67efcc233c3734ce2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723744748053403168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469,PodSandboxId:ba409cd10440ace2c71920dfb0f78837a7d4f4341912a854ba354d08ebb1d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723744737137852356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0,PodSandboxId:640ede242ce296a21c033c6132a422480853c49db339d1819f7a1b3ffc3622bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723744737149133677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:
map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed,PodSandboxId:879bc13e372e6829d7418e5d905d3e17c0e2da3152713d4f5eb29f20edd7a18a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723744737046724472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a,PodSandboxId:40647a4b20092c0585cff98c0321844713b1037dd6991ae0318706a5a7e14751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723744736988914968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe7b2c56-973e-4019-a877-87993ec75093 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.863881659Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=503ba71e-18ed-4dc6-82b5-3ca1ecf14266 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.863963674Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=503ba71e-18ed-4dc6-82b5-3ca1ecf14266 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.864994341Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5272e965-37ba-4bcf-88d7-c87b3ad0299f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.865391906Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745392865369514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5272e965-37ba-4bcf-88d7-c87b3ad0299f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.865915575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e0efc04-0065-4a1e-a236-a5772760fc8e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.865973770Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e0efc04-0065-4a1e-a236-a5772760fc8e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.866310700Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bcbc2400a6bc10fb610629028a1f580df8eacbef273dc1e2887b8bd355ec1dc,PodSandboxId:9654f0dfe2b2a329465e665bdd9df6552b82358853f9d1e0f4e1981499d6da86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723745183912408214,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b,PodSandboxId:e840391611d579fe922b750bbd748741c31482297e55bfc9dfd0911959b2fac2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723745150315984903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81,PodSandboxId:7a0cec1e6c28dcb19cd8b1da08bd7b126ab05e1f6b64db8c986daf7812f0dd68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723745150371025382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73e37bc2dbdb91ad306b4047d3db2d22e0197f1c9778b06d4b967201c83286a,PodSandboxId:545bd160e23a18d5731c8c633c7ae1688f55e9e12629e86d4dc0e4dd9cc185a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745150230337519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082,PodSandboxId:90f33d9a591545ba2d39caa46f713554d1b2900c512d3565ef6fe47b8fde1b63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723745150199396711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716,PodSandboxId:28a7289752b82ecc293f6fd997b71c299c83c818769b3c14a0838b8b9d22da8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723745146331754623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5,PodSandboxId:d4e1240abe3624ac34dd05e70a13e47ddae0933c71b4fd8caa742d49ed62c63c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723745146348109190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6,PodSandboxId:1a75875531d2399ccd8fcbc6d0ca93411197449248ca0cb5bbfc5b67b3454bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723745146315478663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc,PodSandboxId:9d2d5e1d08314d5d95c09908a2ee37d9d3a5b1655ad1371c8a541779e783cb32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723745146276141066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d22d50a70ac6f51b2ef8ac44cd4dbf68940601ef90765ab9de8e21a11150e97,PodSandboxId:3404e4fc3eb6d4d9a1cbfb7f2711143d681d2a4e4b10789bd040c634792ce33e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723744819620335698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0,PodSandboxId:a2d11e45774519fab256b4fbc1a928e4a3707d7a808fd4ee3b6f3ccb789a2b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723744763874119067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f,PodSandboxId:f59a1504b34b8697570eed7aecc8fdbdbb66072afde97e2472575ecafeaad732,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723744763894025525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77,PodSandboxId:09bb5321f840662e81b7010b62d9865f05b0a4d1f63eba891803debbcf8730f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723744752109328605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee,PodSandboxId:89e5bd0b5f5109af0f557ff83b42b4f63edb836582e4cbf67efcc233c3734ce2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723744748053403168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469,PodSandboxId:ba409cd10440ace2c71920dfb0f78837a7d4f4341912a854ba354d08ebb1d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723744737137852356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0,PodSandboxId:640ede242ce296a21c033c6132a422480853c49db339d1819f7a1b3ffc3622bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723744737149133677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:
map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed,PodSandboxId:879bc13e372e6829d7418e5d905d3e17c0e2da3152713d4f5eb29f20edd7a18a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723744737046724472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a,PodSandboxId:40647a4b20092c0585cff98c0321844713b1037dd6991ae0318706a5a7e14751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723744736988914968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e0efc04-0065-4a1e-a236-a5772760fc8e name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.908951007Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6cb01159-c3ff-4da1-b6bb-a937ad99eb2a name=/runtime.v1.RuntimeService/Version
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.909022177Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6cb01159-c3ff-4da1-b6bb-a937ad99eb2a name=/runtime.v1.RuntimeService/Version
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.910545554Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1345dc7-75ae-48a5-9dbf-0fe17ce0a749 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.911164372Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745392911140618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1345dc7-75ae-48a5-9dbf-0fe17ce0a749 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.911715786Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc6a67a7-477e-4f98-9888-e9a662db12c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.911776615Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc6a67a7-477e-4f98-9888-e9a662db12c5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.912114628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bcbc2400a6bc10fb610629028a1f580df8eacbef273dc1e2887b8bd355ec1dc,PodSandboxId:9654f0dfe2b2a329465e665bdd9df6552b82358853f9d1e0f4e1981499d6da86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723745183912408214,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b,PodSandboxId:e840391611d579fe922b750bbd748741c31482297e55bfc9dfd0911959b2fac2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723745150315984903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81,PodSandboxId:7a0cec1e6c28dcb19cd8b1da08bd7b126ab05e1f6b64db8c986daf7812f0dd68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723745150371025382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73e37bc2dbdb91ad306b4047d3db2d22e0197f1c9778b06d4b967201c83286a,PodSandboxId:545bd160e23a18d5731c8c633c7ae1688f55e9e12629e86d4dc0e4dd9cc185a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745150230337519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082,PodSandboxId:90f33d9a591545ba2d39caa46f713554d1b2900c512d3565ef6fe47b8fde1b63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723745150199396711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716,PodSandboxId:28a7289752b82ecc293f6fd997b71c299c83c818769b3c14a0838b8b9d22da8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723745146331754623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5,PodSandboxId:d4e1240abe3624ac34dd05e70a13e47ddae0933c71b4fd8caa742d49ed62c63c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723745146348109190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6,PodSandboxId:1a75875531d2399ccd8fcbc6d0ca93411197449248ca0cb5bbfc5b67b3454bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723745146315478663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc,PodSandboxId:9d2d5e1d08314d5d95c09908a2ee37d9d3a5b1655ad1371c8a541779e783cb32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723745146276141066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d22d50a70ac6f51b2ef8ac44cd4dbf68940601ef90765ab9de8e21a11150e97,PodSandboxId:3404e4fc3eb6d4d9a1cbfb7f2711143d681d2a4e4b10789bd040c634792ce33e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723744819620335698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0,PodSandboxId:a2d11e45774519fab256b4fbc1a928e4a3707d7a808fd4ee3b6f3ccb789a2b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723744763874119067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f,PodSandboxId:f59a1504b34b8697570eed7aecc8fdbdbb66072afde97e2472575ecafeaad732,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723744763894025525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77,PodSandboxId:09bb5321f840662e81b7010b62d9865f05b0a4d1f63eba891803debbcf8730f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723744752109328605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee,PodSandboxId:89e5bd0b5f5109af0f557ff83b42b4f63edb836582e4cbf67efcc233c3734ce2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723744748053403168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469,PodSandboxId:ba409cd10440ace2c71920dfb0f78837a7d4f4341912a854ba354d08ebb1d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723744737137852356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0,PodSandboxId:640ede242ce296a21c033c6132a422480853c49db339d1819f7a1b3ffc3622bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723744737149133677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:
map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed,PodSandboxId:879bc13e372e6829d7418e5d905d3e17c0e2da3152713d4f5eb29f20edd7a18a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723744737046724472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a,PodSandboxId:40647a4b20092c0585cff98c0321844713b1037dd6991ae0318706a5a7e14751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723744736988914968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc6a67a7-477e-4f98-9888-e9a662db12c5 name=/runtime.v1.RuntimeService/ListContainers
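	The repeated RuntimeService/Version, ImageService/ImageFsInfo, and RuntimeService/ListContainers entries above are ordinary CRI queries answered by cri-o on the node. For readers reproducing this by hand, the same data can be pulled with crictl; this is a minimal sketch, assuming the default cri-o socket path /var/run/crio/crio.sock (the endpoint is an assumption from cri-o defaults, not taken from this log).
	
	  # runtime name/version, same data as the VersionResponse lines above
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	  # image filesystem usage, same data as the ImageFsInfo responses above
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
	  # all containers, running and exited, matching the unfiltered ListContainers dumps above
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a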
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.953787667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7e4a3f43-74f8-4b3b-b14d-2db58b261110 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.953860576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7e4a3f43-74f8-4b3b-b14d-2db58b261110 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.954874645Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea5cc998-763a-4716-a5a3-47c7856eeeca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.955322268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745392955297699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea5cc998-763a-4716-a5a3-47c7856eeeca name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.955839210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a12c3047-ddd2-4f46-ab8a-8917a1f6d717 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.955892639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a12c3047-ddd2-4f46-ab8a-8917a1f6d717 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:09:52 multinode-769827 crio[2772]: time="2024-08-15 18:09:52.956227393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6bcbc2400a6bc10fb610629028a1f580df8eacbef273dc1e2887b8bd355ec1dc,PodSandboxId:9654f0dfe2b2a329465e665bdd9df6552b82358853f9d1e0f4e1981499d6da86,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1723745183912408214,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b,PodSandboxId:e840391611d579fe922b750bbd748741c31482297e55bfc9dfd0911959b2fac2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1723745150315984903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81,PodSandboxId:7a0cec1e6c28dcb19cd8b1da08bd7b126ab05e1f6b64db8c986daf7812f0dd68,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723745150371025382,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d73e37bc2dbdb91ad306b4047d3db2d22e0197f1c9778b06d4b967201c83286a,PodSandboxId:545bd160e23a18d5731c8c633c7ae1688f55e9e12629e86d4dc0e4dd9cc185a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745150230337519,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},An
notations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082,PodSandboxId:90f33d9a591545ba2d39caa46f713554d1b2900c512d3565ef6fe47b8fde1b63,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723745150199396711,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.ku
bernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716,PodSandboxId:28a7289752b82ecc293f6fd997b71c299c83c818769b3c14a0838b8b9d22da8c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723745146331754623,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5,PodSandboxId:d4e1240abe3624ac34dd05e70a13e47ddae0933c71b4fd8caa742d49ed62c63c,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723745146348109190,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6,PodSandboxId:1a75875531d2399ccd8fcbc6d0ca93411197449248ca0cb5bbfc5b67b3454bbe,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723745146315478663,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1
,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc,PodSandboxId:9d2d5e1d08314d5d95c09908a2ee37d9d3a5b1655ad1371c8a541779e783cb32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723745146276141066,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.re
startCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d22d50a70ac6f51b2ef8ac44cd4dbf68940601ef90765ab9de8e21a11150e97,PodSandboxId:3404e4fc3eb6d4d9a1cbfb7f2711143d681d2a4e4b10789bd040c634792ce33e,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1723744819620335698,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-jrvlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6334cc80-573e-44e5-af31-6b4d0f980464,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.rest
artCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6badbf1f14b9ca4398e327977be81890eb1c1984b86cde46404b7619f7efe3f0,PodSandboxId:a2d11e45774519fab256b4fbc1a928e4a3707d7a808fd4ee3b6f3ccb789a2b1a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723744763874119067,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 28aaef04-f10c-45e9-b729-5bec128a6557,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f,PodSandboxId:f59a1504b34b8697570eed7aecc8fdbdbb66072afde97e2472575ecafeaad732,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723744763894025525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-d5zq9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5677c98-8d22-46bd-8cae-ddfc9debe01d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"
protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77,PodSandboxId:09bb5321f840662e81b7010b62d9865f05b0a4d1f63eba891803debbcf8730f7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1723744752109328605,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-wt8bf,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: eb5cfdea-9990-438d-b897-918f067a63b7,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee,PodSandboxId:89e5bd0b5f5109af0f557ff83b42b4f63edb836582e4cbf67efcc233c3734ce2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723744748053403168,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hh9zj,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 726d20f9-339c-4e84-b02f-84c948567d44,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469,PodSandboxId:ba409cd10440ace2c71920dfb0f78837a7d4f4341912a854ba354d08ebb1d7bd,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723744737137852356,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66422cec44d660fee6875520686adfc6
,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0,PodSandboxId:640ede242ce296a21c033c6132a422480853c49db339d1819f7a1b3ffc3622bb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723744737149133677,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ced81a9a71afb4e10e170a456d312b6,},Annotations:
map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed,PodSandboxId:879bc13e372e6829d7418e5d905d3e17c0e2da3152713d4f5eb29f20edd7a18a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723744737046724472,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae6775d3617d4763c89fdef0a6d920ea,},
Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a,PodSandboxId:40647a4b20092c0585cff98c0321844713b1037dd6991ae0318706a5a7e14751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723744736988914968,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-769827,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b81e0ea5b7dcb2108f53774cb6dfd40a,},Annotations:map
[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a12c3047-ddd2-4f46-ab8a-8917a1f6d717 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6bcbc2400a6bc       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   9654f0dfe2b2a       busybox-7dff88458-jrvlv
	051882e6acf4a       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   7a0cec1e6c28d       coredns-6f6b679f8f-d5zq9
	c133435cb4e31       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   e840391611d57       kindnet-wt8bf
	d73e37bc2dbdb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   545bd160e23a1       storage-provisioner
	5dd5e3abd823c       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      4 minutes ago       Running             kube-proxy                1                   90f33d9a59154       kube-proxy-hh9zj
	704afc72580d0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   d4e1240abe362       etcd-multinode-769827
	f9907340fbd8c       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      4 minutes ago       Running             kube-scheduler            1                   28a7289752b82       kube-scheduler-multinode-769827
	0c69af92d63ad       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      4 minutes ago       Running             kube-apiserver            1                   1a75875531d23       kube-apiserver-multinode-769827
	8123420b5cbe4       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      4 minutes ago       Running             kube-controller-manager   1                   9d2d5e1d08314       kube-controller-manager-multinode-769827
	4d22d50a70ac6       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   3404e4fc3eb6d       busybox-7dff88458-jrvlv
	65ef23da92ccf       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      10 minutes ago      Exited              coredns                   0                   f59a1504b34b8       coredns-6f6b679f8f-d5zq9
	6badbf1f14b9c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   a2d11e4577451       storage-provisioner
	29c39838952dc       docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b    10 minutes ago      Exited              kindnet-cni               0                   09bb5321f8406       kindnet-wt8bf
	fbe2ea6e1d672       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      10 minutes ago      Exited              kube-proxy                0                   89e5bd0b5f510       kube-proxy-hh9zj
	99b3bcdf65e5f       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      10 minutes ago      Exited              kube-apiserver            0                   640ede242ce29       kube-apiserver-multinode-769827
	006f9c6202ca9       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   ba409cd10440a       etcd-multinode-769827
	75cd818d80b96       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      10 minutes ago      Exited              kube-controller-manager   0                   879bc13e372e6       kube-controller-manager-multinode-769827
	77661e4bf365e       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      10 minutes ago      Exited              kube-scheduler            0                   40647a4b20092       kube-scheduler-multinode-769827
	
	
	==> coredns [051882e6acf4abe9e919becabd10a96d0f189085f5390fdb0e8e12113ed62a81] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57737 - 30846 "HINFO IN 613272218464715039.6391360153872604405. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015756917s
	
	
	==> coredns [65ef23da92ccf48f7ca3381a06dadc7d16f706e1a876db62a12c9c8f24bf686f] <==
	[INFO] 10.244.1.2:36465 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001479288s
	[INFO] 10.244.1.2:59000 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106516s
	[INFO] 10.244.1.2:57277 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061901s
	[INFO] 10.244.1.2:56816 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001158854s
	[INFO] 10.244.1.2:60901 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128073s
	[INFO] 10.244.1.2:46179 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056327s
	[INFO] 10.244.1.2:52150 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051547s
	[INFO] 10.244.0.3:53403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000096801s
	[INFO] 10.244.0.3:59707 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000039357s
	[INFO] 10.244.0.3:40454 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051731s
	[INFO] 10.244.0.3:39818 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029676s
	[INFO] 10.244.1.2:55990 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122435s
	[INFO] 10.244.1.2:33756 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080074s
	[INFO] 10.244.1.2:52274 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103858s
	[INFO] 10.244.1.2:57630 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080723s
	[INFO] 10.244.0.3:58766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103835s
	[INFO] 10.244.0.3:37671 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000085473s
	[INFO] 10.244.0.3:42401 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00033564s
	[INFO] 10.244.0.3:39167 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188704s
	[INFO] 10.244.1.2:34856 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139374s
	[INFO] 10.244.1.2:41841 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000108964s
	[INFO] 10.244.1.2:56881 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000107879s
	[INFO] 10.244.1.2:53178 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000161933s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-769827
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-769827
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=multinode-769827
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_59_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:58:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-769827
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:09:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:05:49 +0000   Thu, 15 Aug 2024 17:58:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:05:49 +0000   Thu, 15 Aug 2024 17:58:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:05:49 +0000   Thu, 15 Aug 2024 17:58:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:05:49 +0000   Thu, 15 Aug 2024 17:59:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    multinode-769827
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e15ffbed288486092b0fdf6bedd0076
	  System UUID:                4e15ffbe-d288-4860-92b0-fdf6bedd0076
	  Boot ID:                    40b4a32b-9d7d-4a8d-9166-0a48755633cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-jrvlv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kube-system                 coredns-6f6b679f8f-d5zq9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-769827                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-wt8bf                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-769827             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-769827    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-hh9zj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-769827             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 10m                  kube-proxy       
	  Normal  Starting                 4m2s                 kube-proxy       
	  Normal  NodeHasSufficientPID     10m                  kubelet          Node multinode-769827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                  kubelet          Node multinode-769827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                  kubelet          Node multinode-769827 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                  node-controller  Node multinode-769827 event: Registered Node multinode-769827 in Controller
	  Normal  NodeReady                10m                  kubelet          Node multinode-769827 status is now: NodeReady
	  Normal  Starting                 4m8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet          Node multinode-769827 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m8s (x8 over 4m8s)  kubelet          Node multinode-769827 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet          Node multinode-769827 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node multinode-769827 event: Registered Node multinode-769827 in Controller
	
	
	Name:               multinode-769827-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-769827-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=multinode-769827
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_15T18_06_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:06:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-769827-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:07:31 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 15 Aug 2024 18:07:00 +0000   Thu, 15 Aug 2024 18:08:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 15 Aug 2024 18:07:00 +0000   Thu, 15 Aug 2024 18:08:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 15 Aug 2024 18:07:00 +0000   Thu, 15 Aug 2024 18:08:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 15 Aug 2024 18:07:00 +0000   Thu, 15 Aug 2024 18:08:12 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    multinode-769827-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e711d8d406f4f60b3dcd5552dea75d6
	  System UUID:                5e711d8d-406f-4f60-b3dc-d5552dea75d6
	  Boot ID:                    0a046e00-c2cc-43e7-a84d-b460d2c4f4b5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7pwdg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-b7s6v              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m58s
	  kube-system                 kube-proxy-cwn29           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m54s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m58s (x2 over 9m59s)  kubelet          Node multinode-769827-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m58s (x2 over 9m59s)  kubelet          Node multinode-769827-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m58s (x2 over 9m59s)  kubelet          Node multinode-769827-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m39s                  kubelet          Node multinode-769827-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-769827-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-769827-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-769827-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-769827-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                   node-controller  Node multinode-769827-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.068024] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.214059] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.133232] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.292170] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +3.955161] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +3.828498] systemd-fstab-generator[885]: Ignoring "noauto" option for root device
	[  +0.064114] kauditd_printk_skb: 158 callbacks suppressed
	[Aug15 17:59] systemd-fstab-generator[1225]: Ignoring "noauto" option for root device
	[  +0.093748] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.462854] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.133574] systemd-fstab-generator[1336]: Ignoring "noauto" option for root device
	[  +5.262886] kauditd_printk_skb: 59 callbacks suppressed
	[Aug15 18:00] kauditd_printk_skb: 12 callbacks suppressed
	[Aug15 18:05] systemd-fstab-generator[2690]: Ignoring "noauto" option for root device
	[  +0.155129] systemd-fstab-generator[2702]: Ignoring "noauto" option for root device
	[  +0.168418] systemd-fstab-generator[2716]: Ignoring "noauto" option for root device
	[  +0.141315] systemd-fstab-generator[2728]: Ignoring "noauto" option for root device
	[  +0.291455] systemd-fstab-generator[2756]: Ignoring "noauto" option for root device
	[  +5.500315] systemd-fstab-generator[2856]: Ignoring "noauto" option for root device
	[  +0.083284] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.429177] systemd-fstab-generator[2980]: Ignoring "noauto" option for root device
	[  +4.624578] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.812681] kauditd_printk_skb: 34 callbacks suppressed
	[Aug15 18:06] systemd-fstab-generator[3819]: Ignoring "noauto" option for root device
	[ +18.146233] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [006f9c6202ca91a6ab085082525950a17a4c56d4a703f00eaa5ad79917be0469] <==
	{"level":"info","ts":"2024-08-15T17:59:55.096789Z","caller":"traceutil/trace.go:171","msg":"trace[240708071] linearizableReadLoop","detail":"{readStateIndex:462; appliedIndex:461; }","duration":"136.366175ms","start":"2024-08-15T17:59:54.960393Z","end":"2024-08-15T17:59:55.096759Z","steps":["trace[240708071] 'read index received'  (duration: 23.461µs)","trace[240708071] 'applied index is now lower than readState.Index'  (duration: 136.340902ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T17:59:55.096944Z","caller":"traceutil/trace.go:171","msg":"trace[1919787891] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"235.435522ms","start":"2024-08-15T17:59:54.861497Z","end":"2024-08-15T17:59:55.096932Z","steps":["trace[1919787891] 'process raft request'  (duration: 87.662459ms)","trace[1919787891] 'compare'  (duration: 146.731994ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T17:59:55.097254Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"136.850909ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-769827-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T17:59:55.097341Z","caller":"traceutil/trace.go:171","msg":"trace[1680751135] range","detail":"{range_begin:/registry/minions/multinode-769827-m02; range_end:; response_count:0; response_revision:442; }","duration":"136.935176ms","start":"2024-08-15T17:59:54.960389Z","end":"2024-08-15T17:59:55.097324Z","steps":["trace[1680751135] 'agreement among raft nodes before linearized reading'  (duration: 136.8082ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:00:00.402997Z","caller":"traceutil/trace.go:171","msg":"trace[1278451307] linearizableReadLoop","detail":"{readStateIndex:503; appliedIndex:502; }","duration":"142.148778ms","start":"2024-08-15T18:00:00.260833Z","end":"2024-08-15T18:00:00.402982Z","steps":["trace[1278451307] 'read index received'  (duration: 141.99755ms)","trace[1278451307] 'applied index is now lower than readState.Index'  (duration: 150.722µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:00:00.403176Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.314544ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/\" range_end:\"/registry/namespaces0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-08-15T18:00:00.403220Z","caller":"traceutil/trace.go:171","msg":"trace[151106694] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; response_count:0; response_revision:481; }","duration":"142.384437ms","start":"2024-08-15T18:00:00.260829Z","end":"2024-08-15T18:00:00.403213Z","steps":["trace[151106694] 'agreement among raft nodes before linearized reading'  (duration: 142.295595ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:00:00.403236Z","caller":"traceutil/trace.go:171","msg":"trace[1693998488] transaction","detail":"{read_only:false; response_revision:481; number_of_response:1; }","duration":"155.506903ms","start":"2024-08-15T18:00:00.247716Z","end":"2024-08-15T18:00:00.403223Z","steps":["trace[1693998488] 'process raft request'  (duration: 155.157007ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:00:50.806192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.4541ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9419438424321490062 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-769827-m03.17ebf8cf0cdf8797\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-769827-m03.17ebf8cf0cdf8797\" value_size:646 lease:196066387466713866 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T18:00:50.806391Z","caller":"traceutil/trace.go:171","msg":"trace[2086854109] linearizableReadLoop","detail":"{readStateIndex:608; appliedIndex:607; }","duration":"209.124861ms","start":"2024-08-15T18:00:50.597233Z","end":"2024-08-15T18:00:50.806358Z","steps":["trace[2086854109] 'read index received'  (duration: 54.306132ms)","trace[2086854109] 'applied index is now lower than readState.Index'  (duration: 154.817804ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:00:50.806504Z","caller":"traceutil/trace.go:171","msg":"trace[1053483193] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"230.59659ms","start":"2024-08-15T18:00:50.575891Z","end":"2024-08-15T18:00:50.806487Z","steps":["trace[1053483193] 'process raft request'  (duration: 75.680765ms)","trace[1053483193] 'compare'  (duration: 154.333666ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:00:50.806846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.607844ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-769827-m03\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:00:50.806953Z","caller":"traceutil/trace.go:171","msg":"trace[1078551274] range","detail":"{range_begin:/registry/minions/multinode-769827-m03; range_end:; response_count:0; response_revision:575; }","duration":"209.712271ms","start":"2024-08-15T18:00:50.597228Z","end":"2024-08-15T18:00:50.806941Z","steps":["trace[1078551274] 'agreement among raft nodes before linearized reading'  (duration: 209.592282ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:00:50.806846Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"169.547762ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2024-08-15T18:00:50.807092Z","caller":"traceutil/trace.go:171","msg":"trace[2094307122] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:575; }","duration":"169.808609ms","start":"2024-08-15T18:00:50.637275Z","end":"2024-08-15T18:00:50.807084Z","steps":["trace[2094307122] 'agreement among raft nodes before linearized reading'  (duration: 169.402237ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:04:05.417942Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T18:04:05.418080Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-769827","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"]}
	{"level":"warn","ts":"2024-08-15T18:04:05.418319Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T18:04:05.418441Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T18:04:05.470490Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.73:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T18:04:05.470550Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.73:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T18:04:05.470665Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"217be714ae9a82b8","current-leader-member-id":"217be714ae9a82b8"}
	{"level":"info","ts":"2024-08-15T18:04:05.477521Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-08-15T18:04:05.477742Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-08-15T18:04:05.477778Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-769827","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"]}
	
	
	==> etcd [704afc72580d06b4fe0dccbfd7555c08d6f40ffe914a25d4a364c4b84ce5ccb5] <==
	{"level":"info","ts":"2024-08-15T18:05:46.816006Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:05:46.816049Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:05:46.820790Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:05:46.827931Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T18:05:46.828218Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"217be714ae9a82b8","initial-advertise-peer-urls":["https://192.168.39.73:2380"],"listen-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T18:05:46.828263Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T18:05:46.828329Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-08-15T18:05:46.828351Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2024-08-15T18:05:47.820328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T18:05:47.820368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T18:05:47.820435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgPreVoteResp from 217be714ae9a82b8 at term 2"}
	{"level":"info","ts":"2024-08-15T18:05:47.820450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T18:05:47.820456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgVoteResp from 217be714ae9a82b8 at term 3"}
	{"level":"info","ts":"2024-08-15T18:05:47.820475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became leader at term 3"}
	{"level":"info","ts":"2024-08-15T18:05:47.820482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 217be714ae9a82b8 elected leader 217be714ae9a82b8 at term 3"}
	{"level":"info","ts":"2024-08-15T18:05:47.826215Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:05:47.827367Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:05:47.828308Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.73:2379"}
	{"level":"info","ts":"2024-08-15T18:05:47.828747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:05:47.829364Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:05:47.830271Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T18:05:47.826162Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"217be714ae9a82b8","local-member-attributes":"{Name:multinode-769827 ClientURLs:[https://192.168.39.73:2379]}","request-path":"/0/members/217be714ae9a82b8/attributes","cluster-id":"97141299b087eff6","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T18:05:47.831805Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T18:05:47.831836Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T18:07:11.204120Z","caller":"traceutil/trace.go:171","msg":"trace[248975640] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"125.788156ms","start":"2024-08-15T18:07:11.078291Z","end":"2024-08-15T18:07:11.204079Z","steps":["trace[248975640] 'process raft request'  (duration: 125.671057ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:09:53 up 11 min,  0 users,  load average: 0.45, 0.31, 0.13
	Linux multinode-769827 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [29c39838952dccb6bce840a2ee26e580879a03fc91b69b5799021857d3cefd77] <==
	I0815 18:03:23.237420       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:03:33.233320       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:03:33.233347       1 main.go:299] handling current node
	I0815 18:03:33.233361       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:03:33.233366       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:03:33.233495       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:03:33.233519       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:03:43.227704       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:03:43.227754       1 main.go:299] handling current node
	I0815 18:03:43.227768       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:03:43.227774       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:03:43.227932       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:03:43.227960       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:03:53.236730       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:03:53.236833       1 main.go:299] handling current node
	I0815 18:03:53.236863       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:03:53.236882       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:03:53.237021       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:03:53.237044       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	I0815 18:04:03.235722       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:04:03.235779       1 main.go:299] handling current node
	I0815 18:04:03.235803       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:04:03.235811       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:04:03.235956       1 main.go:295] Handling node with IPs: map[192.168.39.143:{}]
	I0815 18:04:03.235980       1 main.go:322] Node multinode-769827-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c133435cb4e31b26fc4a909ae3c3199af3cccda5810b2ca8937b4860708ebb3b] <==
	I0815 18:08:51.334511       1 main.go:299] handling current node
	I0815 18:09:01.342832       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:09:01.342947       1 main.go:299] handling current node
	I0815 18:09:01.342979       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:09:01.343009       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:09:11.335079       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:09:11.335115       1 main.go:299] handling current node
	I0815 18:09:11.335128       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:09:11.335133       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:09:21.336268       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:09:21.336361       1 main.go:299] handling current node
	I0815 18:09:21.336390       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:09:21.336407       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:09:31.344067       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:09:31.344156       1 main.go:299] handling current node
	I0815 18:09:31.344182       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:09:31.344200       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:09:41.342837       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:09:41.342931       1 main.go:299] handling current node
	I0815 18:09:41.342960       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:09:41.342978       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	I0815 18:09:51.334106       1 main.go:295] Handling node with IPs: map[192.168.39.73:{}]
	I0815 18:09:51.334230       1 main.go:299] handling current node
	I0815 18:09:51.334260       1 main.go:295] Handling node with IPs: map[192.168.39.245:{}]
	I0815 18:09:51.334280       1 main.go:322] Node multinode-769827-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [0c69af92d63ad4a21ab07081944894a987e424ff5f5b2023f89830f44a6cd7d6] <==
	I0815 18:05:49.192142       1 aggregator.go:171] initial CRD sync complete...
	I0815 18:05:49.192243       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 18:05:49.192256       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 18:05:49.226015       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 18:05:49.236123       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 18:05:49.236164       1 policy_source.go:224] refreshing policies
	I0815 18:05:49.238170       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 18:05:49.285568       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 18:05:49.289494       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 18:05:49.291562       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 18:05:49.289502       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 18:05:49.289511       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 18:05:49.292108       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 18:05:49.294887       1 cache.go:39] Caches are synced for autoregister controller
	I0815 18:05:49.294903       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 18:05:49.303168       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0815 18:05:49.304132       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0815 18:05:50.098195       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 18:05:51.272387       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 18:05:51.410210       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 18:05:51.422993       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 18:05:51.482930       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 18:05:51.489180       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 18:05:52.811848       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 18:05:52.961510       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [99b3bcdf65e5fb06aaa650fe996547a2bde9f8e0e73ab36742c73a07dbbeebd0] <==
	W0815 18:04:05.459249       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.459344       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.459434       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.459466       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.459557       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460310       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460351       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460444       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460530       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460705       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460744       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460832       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460928       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.460959       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.461399       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.461511       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464190       1 logging.go:55] [core] [Channel #136 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464270       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464305       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464336       1 logging.go:55] [core] [Channel #85 SubChannel #86]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464376       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464410       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464443       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464478       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:04:05.464516       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [75cd818d80b964ce34d14741c96681820656d13c40877bbade9496f9b94c83ed] <==
	I0815 18:01:38.646880       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:38.647009       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:01:40.183093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:01:40.183150       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-769827-m03\" does not exist"
	I0815 18:01:40.198205       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-769827-m03" podCIDRs=["10.244.3.0/24"]
	I0815 18:01:40.198245       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:40.198266       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:40.207147       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:40.220286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:40.542003       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:41.556654       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:50.284481       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:01:59.988105       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:01:59.988359       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:00.004846       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:01.468177       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:41.489960       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m03"
	I0815 18:02:41.491852       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:02:41.496322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:41.515005       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:02:41.526662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:02:41.544679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.40158ms"
	I0815 18:02:41.545226       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.793µs"
	I0815 18:02:46.558650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:02:56.634498       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	
	
	==> kube-controller-manager [8123420b5cbe4bb676a5a956fe125b3e54508669d4d317a2985ab9174ee33dfc] <==
	I0815 18:07:07.936748       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-769827-m03" podCIDRs=["10.244.2.0/24"]
	I0815 18:07:07.936785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:07.936935       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:07.946565       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:08.379029       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:08.725991       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:12.883356       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:18.140512       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:26.591750       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m03"
	I0815 18:07:26.591890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:26.600410       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:27.782569       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:31.350069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:31.364185       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:31.814461       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m03"
	I0815 18:07:31.814532       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-769827-m02"
	I0815 18:08:12.804915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:08:12.828326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:08:12.842817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.699215ms"
	I0815 18:08:12.844156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="238.212µs"
	I0815 18:08:17.893371       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-769827-m02"
	I0815 18:08:32.727455       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bbf9m"
	I0815 18:08:32.753897       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-bbf9m"
	I0815 18:08:32.753940       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4lmfs"
	I0815 18:08:32.820561       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-4lmfs"
	
	
	==> kube-proxy [5dd5e3abd823c15ad5896347396d30ad6519b1209e5b4c1a886706d0489ed082] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:05:50.627723       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:05:50.645543       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.73"]
	E0815 18:05:50.645729       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:05:50.706716       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:05:50.706887       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:05:50.706976       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:05:50.709674       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:05:50.709978       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:05:50.710135       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:05:50.711529       1 config.go:197] "Starting service config controller"
	I0815 18:05:50.711958       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:05:50.712025       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:05:50.712043       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:05:50.712581       1 config.go:326] "Starting node config controller"
	I0815 18:05:50.713268       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:05:50.812782       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:05:50.812825       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:05:50.814312       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [fbe2ea6e1d672f39c911a8d732098852eecc3d3d5177d08b2e67d8dd78b838ee] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 17:59:08.500017       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 17:59:08.515944       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.73"]
	E0815 17:59:08.516024       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:59:08.560879       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 17:59:08.560930       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 17:59:08.560967       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:59:08.567268       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:59:08.567518       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:59:08.567549       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:59:08.570884       1 config.go:197] "Starting service config controller"
	I0815 17:59:08.570935       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:59:08.570960       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:59:08.570964       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:59:08.571664       1 config.go:326] "Starting node config controller"
	I0815 17:59:08.571690       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:59:08.671813       1 shared_informer.go:320] Caches are synced for node config
	I0815 17:59:08.671847       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:59:08.671860       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [77661e4bf365eb272b89f4fb53f0a55cb4cf83e97ba5e928e13bd0cf5a3b229a] <==
	E0815 17:58:59.592443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:58:59.592401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:58:59.592662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.493836       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 17:59:00.493940       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.558358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:59:00.558488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.672788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:59:00.672840       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.737548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:59:00.737637       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.758971       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:59:00.759020       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.761825       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:59:00.761869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.777501       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:59:00.777553       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.778795       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 17:59:00.778834       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 17:59:00.781247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 17:59:00.781285       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:59:00.877722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:59:00.877773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0815 17:59:02.787867       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 18:04:05.427522       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f9907340fbd8cc209ebb3f5fa117f1000cfd6cf09830b4e6100a0a08d0015716] <==
	I0815 18:05:47.655158       1 serving.go:386] Generated self-signed cert in-memory
	I0815 18:05:49.259327       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 18:05:49.259578       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:05:49.266770       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 18:05:49.267113       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0815 18:05:49.267222       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0815 18:05:49.267326       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 18:05:49.268853       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 18:05:49.268952       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 18:05:49.269056       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0815 18:05:49.269080       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0815 18:05:49.367795       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0815 18:05:49.369148       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 18:05:49.369413       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:08:35 multinode-769827 kubelet[2987]: E0815 18:08:35.780018    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745315779368492,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:08:45 multinode-769827 kubelet[2987]: E0815 18:08:45.694761    2987 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:08:45 multinode-769827 kubelet[2987]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:08:45 multinode-769827 kubelet[2987]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:08:45 multinode-769827 kubelet[2987]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:08:45 multinode-769827 kubelet[2987]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:08:45 multinode-769827 kubelet[2987]: E0815 18:08:45.785918    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745325782751382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:08:45 multinode-769827 kubelet[2987]: E0815 18:08:45.786076    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745325782751382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:08:55 multinode-769827 kubelet[2987]: E0815 18:08:55.791919    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745335789991497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:08:55 multinode-769827 kubelet[2987]: E0815 18:08:55.791971    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745335789991497,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:05 multinode-769827 kubelet[2987]: E0815 18:09:05.793390    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745345793010022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:05 multinode-769827 kubelet[2987]: E0815 18:09:05.793418    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745345793010022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:15 multinode-769827 kubelet[2987]: E0815 18:09:15.795012    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745355794746553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:15 multinode-769827 kubelet[2987]: E0815 18:09:15.795036    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745355794746553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:25 multinode-769827 kubelet[2987]: E0815 18:09:25.800021    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745365797013307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:25 multinode-769827 kubelet[2987]: E0815 18:09:25.800091    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745365797013307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:35 multinode-769827 kubelet[2987]: E0815 18:09:35.802151    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745375801546953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:35 multinode-769827 kubelet[2987]: E0815 18:09:35.802484    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745375801546953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:45 multinode-769827 kubelet[2987]: E0815 18:09:45.695329    2987 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:09:45 multinode-769827 kubelet[2987]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:09:45 multinode-769827 kubelet[2987]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:09:45 multinode-769827 kubelet[2987]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:09:45 multinode-769827 kubelet[2987]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:09:45 multinode-769827 kubelet[2987]: E0815 18:09:45.804108    2987 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745385803552366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:09:45 multinode-769827 kubelet[2987]: E0815 18:09:45.804134    2987 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745385803552366,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143891,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:09:52.539470   52577 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19450-13013/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-769827 -n multinode-769827
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-769827 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.40s)

                                                
                                    
x
+
TestPreload (274.89s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-651099 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0815 18:14:52.218904   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-651099 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m53.741958176s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-651099 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-651099 image pull gcr.io/k8s-minikube/busybox: (3.195626516s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-651099
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-651099: (7.287538843s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-651099 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0815 18:17:47.733783   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-651099 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.58762835s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-651099 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-08-15 18:18:38.414036762 +0000 UTC m=+4409.392141925
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-651099 -n test-preload-651099
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-651099 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-651099 logs -n 25: (1.148321669s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827 sudo cat                                       | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m03_multinode-769827.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt                       | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m02:/home/docker/cp-test_multinode-769827-m03_multinode-769827-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n                                                                 | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | multinode-769827-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-769827 ssh -n multinode-769827-m02 sudo cat                                   | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	|         | /home/docker/cp-test_multinode-769827-m03_multinode-769827-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-769827 node stop m03                                                          | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:01 UTC |
	| node    | multinode-769827 node start                                                             | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:01 UTC | 15 Aug 24 18:02 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-769827                                                                | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:02 UTC |                     |
	| stop    | -p multinode-769827                                                                     | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:02 UTC |                     |
	| start   | -p multinode-769827                                                                     | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:04 UTC | 15 Aug 24 18:07 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-769827                                                                | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:07 UTC |                     |
	| node    | multinode-769827 node delete                                                            | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:07 UTC | 15 Aug 24 18:07 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-769827 stop                                                                   | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:07 UTC |                     |
	| start   | -p multinode-769827                                                                     | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:09 UTC | 15 Aug 24 18:13 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-769827                                                                | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:13 UTC |                     |
	| start   | -p multinode-769827-m02                                                                 | multinode-769827-m02 | jenkins | v1.33.1 | 15 Aug 24 18:13 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-769827-m03                                                                 | multinode-769827-m03 | jenkins | v1.33.1 | 15 Aug 24 18:13 UTC | 15 Aug 24 18:14 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-769827                                                                 | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:14 UTC |                     |
	| delete  | -p multinode-769827-m03                                                                 | multinode-769827-m03 | jenkins | v1.33.1 | 15 Aug 24 18:14 UTC | 15 Aug 24 18:14 UTC |
	| delete  | -p multinode-769827                                                                     | multinode-769827     | jenkins | v1.33.1 | 15 Aug 24 18:14 UTC | 15 Aug 24 18:14 UTC |
	| start   | -p test-preload-651099                                                                  | test-preload-651099  | jenkins | v1.33.1 | 15 Aug 24 18:14 UTC | 15 Aug 24 18:17 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-651099 image pull                                                          | test-preload-651099  | jenkins | v1.33.1 | 15 Aug 24 18:17 UTC | 15 Aug 24 18:17 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-651099                                                                  | test-preload-651099  | jenkins | v1.33.1 | 15 Aug 24 18:17 UTC | 15 Aug 24 18:17 UTC |
	| start   | -p test-preload-651099                                                                  | test-preload-651099  | jenkins | v1.33.1 | 15 Aug 24 18:17 UTC | 15 Aug 24 18:18 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-651099 image list                                                          | test-preload-651099  | jenkins | v1.33.1 | 15 Aug 24 18:18 UTC | 15 Aug 24 18:18 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:17:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:17:10.653359   55339 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:17:10.653609   55339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:17:10.653619   55339 out.go:358] Setting ErrFile to fd 2...
	I0815 18:17:10.653625   55339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:17:10.653790   55339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:17:10.654308   55339 out.go:352] Setting JSON to false
	I0815 18:17:10.655187   55339 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7177,"bootTime":1723738654,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:17:10.655248   55339 start.go:139] virtualization: kvm guest
	I0815 18:17:10.657524   55339 out.go:177] * [test-preload-651099] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:17:10.659172   55339 notify.go:220] Checking for updates...
	I0815 18:17:10.659233   55339 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:17:10.660623   55339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:17:10.661793   55339 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:17:10.663006   55339 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:17:10.663987   55339 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:17:10.665087   55339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:17:10.666542   55339 config.go:182] Loaded profile config "test-preload-651099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0815 18:17:10.666932   55339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:17:10.666997   55339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:17:10.681269   55339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0815 18:17:10.681653   55339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:17:10.682195   55339 main.go:141] libmachine: Using API Version  1
	I0815 18:17:10.682220   55339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:17:10.682585   55339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:17:10.682767   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:10.684578   55339 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 18:17:10.685621   55339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:17:10.685892   55339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:17:10.685923   55339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:17:10.699958   55339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45413
	I0815 18:17:10.700370   55339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:17:10.700836   55339 main.go:141] libmachine: Using API Version  1
	I0815 18:17:10.700857   55339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:17:10.701153   55339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:17:10.701308   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:10.734853   55339 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:17:10.736116   55339 start.go:297] selected driver: kvm2
	I0815 18:17:10.736134   55339 start.go:901] validating driver "kvm2" against &{Name:test-preload-651099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-651099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:17:10.736231   55339 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:17:10.736905   55339 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:17:10.736969   55339 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:17:10.751126   55339 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:17:10.751423   55339 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:17:10.751489   55339 cni.go:84] Creating CNI manager for ""
	I0815 18:17:10.751502   55339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:17:10.751546   55339 start.go:340] cluster config:
	{Name:test-preload-651099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-651099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:17:10.751629   55339 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:17:10.753517   55339 out.go:177] * Starting "test-preload-651099" primary control-plane node in "test-preload-651099" cluster
	I0815 18:17:10.754828   55339 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0815 18:17:10.866860   55339 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0815 18:17:10.866890   55339 cache.go:56] Caching tarball of preloaded images
	I0815 18:17:10.867057   55339 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0815 18:17:10.868955   55339 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0815 18:17:10.870132   55339 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0815 18:17:11.000606   55339 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0815 18:17:23.904956   55339 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0815 18:17:23.905055   55339 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0815 18:17:24.739658   55339 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
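The lines above download the v1.24.4 preload tarball and then verify it against the md5 checksum embedded in the download URL. For illustration only (this is not minikube's actual download code; the local file name and error handling are simplified), the same download-then-verify step could look like this in Go:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// Minimal sketch of "download preload, then verify checksum".
// URL and expected MD5 are taken from the log above.
func main() {
	const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4"
	const wantMD5 = "b2ee0ab83ed99f9e7ff71cb0cf27e8f9"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	f, err := os.Create("preloaded-images.tar.lz4") // hypothetical local path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Hash while writing so the payload is only read once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		panic(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Println("checksum ok:", got == wantMD5)
}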
	I0815 18:17:24.739806   55339 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/config.json ...
	I0815 18:17:24.740049   55339 start.go:360] acquireMachinesLock for test-preload-651099: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:17:24.740120   55339 start.go:364] duration metric: took 47.781µs to acquireMachinesLock for "test-preload-651099"
	I0815 18:17:24.740138   55339 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:17:24.740144   55339 fix.go:54] fixHost starting: 
	I0815 18:17:24.740482   55339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:17:24.740532   55339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:17:24.754786   55339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42145
	I0815 18:17:24.755233   55339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:17:24.755713   55339 main.go:141] libmachine: Using API Version  1
	I0815 18:17:24.755736   55339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:17:24.756012   55339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:17:24.756191   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:24.756321   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetState
	I0815 18:17:24.757853   55339 fix.go:112] recreateIfNeeded on test-preload-651099: state=Stopped err=<nil>
	I0815 18:17:24.757887   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	W0815 18:17:24.758025   55339 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:17:24.760081   55339 out.go:177] * Restarting existing kvm2 VM for "test-preload-651099" ...
	I0815 18:17:24.761403   55339 main.go:141] libmachine: (test-preload-651099) Calling .Start
	I0815 18:17:24.761560   55339 main.go:141] libmachine: (test-preload-651099) Ensuring networks are active...
	I0815 18:17:24.762241   55339 main.go:141] libmachine: (test-preload-651099) Ensuring network default is active
	I0815 18:17:24.762516   55339 main.go:141] libmachine: (test-preload-651099) Ensuring network mk-test-preload-651099 is active
	I0815 18:17:24.762841   55339 main.go:141] libmachine: (test-preload-651099) Getting domain xml...
	I0815 18:17:24.763449   55339 main.go:141] libmachine: (test-preload-651099) Creating domain...
	I0815 18:17:25.945850   55339 main.go:141] libmachine: (test-preload-651099) Waiting to get IP...
	I0815 18:17:25.946598   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:25.946973   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:25.947003   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:25.946942   55406 retry.go:31] will retry after 305.217641ms: waiting for machine to come up
	I0815 18:17:26.253386   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:26.253729   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:26.253761   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:26.253694   55406 retry.go:31] will retry after 319.370191ms: waiting for machine to come up
	I0815 18:17:26.574197   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:26.574638   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:26.574669   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:26.574579   55406 retry.go:31] will retry after 378.142435ms: waiting for machine to come up
	I0815 18:17:26.954135   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:26.954618   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:26.954647   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:26.954565   55406 retry.go:31] will retry after 522.958973ms: waiting for machine to come up
	I0815 18:17:27.479323   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:27.479754   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:27.479779   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:27.479702   55406 retry.go:31] will retry after 646.892164ms: waiting for machine to come up
	I0815 18:17:28.128588   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:28.128995   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:28.129018   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:28.128946   55406 retry.go:31] will retry after 625.564046ms: waiting for machine to come up
	I0815 18:17:28.755628   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:28.755967   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:28.755991   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:28.755917   55406 retry.go:31] will retry after 857.93578ms: waiting for machine to come up
	I0815 18:17:29.615793   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:29.616095   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:29.616120   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:29.616064   55406 retry.go:31] will retry after 1.315388551s: waiting for machine to come up
	I0815 18:17:30.932960   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:30.933311   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:30.933340   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:30.933254   55406 retry.go:31] will retry after 1.434289675s: waiting for machine to come up
	I0815 18:17:32.369919   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:32.370239   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:32.370265   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:32.370208   55406 retry.go:31] will retry after 1.881913606s: waiting for machine to come up
	I0815 18:17:34.253464   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:34.253947   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:34.253971   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:34.253888   55406 retry.go:31] will retry after 1.892132374s: waiting for machine to come up
	I0815 18:17:36.148383   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:36.148787   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:36.148817   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:36.148735   55406 retry.go:31] will retry after 2.418417138s: waiting for machine to come up
	I0815 18:17:38.570230   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:38.570586   55339 main.go:141] libmachine: (test-preload-651099) DBG | unable to find current IP address of domain test-preload-651099 in network mk-test-preload-651099
	I0815 18:17:38.570617   55339 main.go:141] libmachine: (test-preload-651099) DBG | I0815 18:17:38.570524   55406 retry.go:31] will retry after 2.974563399s: waiting for machine to come up
	I0815 18:17:41.547753   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.548150   55339 main.go:141] libmachine: (test-preload-651099) Found IP for machine: 192.168.39.43
	I0815 18:17:41.548176   55339 main.go:141] libmachine: (test-preload-651099) Reserving static IP address...
	I0815 18:17:41.548193   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has current primary IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.548560   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "test-preload-651099", mac: "52:54:00:24:16:b9", ip: "192.168.39.43"} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:41.548582   55339 main.go:141] libmachine: (test-preload-651099) Reserved static IP address: 192.168.39.43
	I0815 18:17:41.548594   55339 main.go:141] libmachine: (test-preload-651099) DBG | skip adding static IP to network mk-test-preload-651099 - found existing host DHCP lease matching {name: "test-preload-651099", mac: "52:54:00:24:16:b9", ip: "192.168.39.43"}
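The repeated "will retry after ..." messages above are the driver polling libvirt for the VM's DHCP lease with a randomized, growing delay until an IP appears. A rough Go sketch of that pattern follows; lookupIP is a hypothetical stand-in for the lease query, and the backoff constants only approximate what the log shows:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease lookup; it always fails
// here so the retry loop is visible when the sketch is run.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls with a randomized, growing delay, mirroring the
// "will retry after ..." lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the backoff between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if _, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println(err)
	}
}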
	I0815 18:17:41.548608   55339 main.go:141] libmachine: (test-preload-651099) DBG | Getting to WaitForSSH function...
	I0815 18:17:41.548618   55339 main.go:141] libmachine: (test-preload-651099) Waiting for SSH to be available...
	I0815 18:17:41.550864   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.551198   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:41.551240   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.551315   55339 main.go:141] libmachine: (test-preload-651099) DBG | Using SSH client type: external
	I0815 18:17:41.551339   55339 main.go:141] libmachine: (test-preload-651099) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa (-rw-------)
	I0815 18:17:41.551373   55339 main.go:141] libmachine: (test-preload-651099) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.43 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:17:41.551388   55339 main.go:141] libmachine: (test-preload-651099) DBG | About to run SSH command:
	I0815 18:17:41.551401   55339 main.go:141] libmachine: (test-preload-651099) DBG | exit 0
	I0815 18:17:41.672527   55339 main.go:141] libmachine: (test-preload-651099) DBG | SSH cmd err, output: <nil>: 
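Here SSH availability is probed by running `exit 0` through an external ssh client with host-key checking disabled. A trimmed-down Go equivalent of that probe (flags, key path and address copied from the log; this is not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

// Probe SSH by running a no-op command; success means the guest's sshd is up.
func main() {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa",
		"-p", "22",
		"docker@192.168.39.43",
		"exit 0")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("SSH not ready yet: %v (%s)\n", err, out)
		return
	}
	fmt.Println("SSH is available")
}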
	I0815 18:17:41.672848   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetConfigRaw
	I0815 18:17:41.673491   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetIP
	I0815 18:17:41.675593   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.675908   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:41.675938   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.676139   55339 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/config.json ...
	I0815 18:17:41.676291   55339 machine.go:93] provisionDockerMachine start ...
	I0815 18:17:41.676308   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:41.676522   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:41.678512   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.678879   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:41.678899   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.679061   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:41.679234   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:41.679387   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:41.679542   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:41.679676   55339 main.go:141] libmachine: Using SSH client type: native
	I0815 18:17:41.679844   55339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0815 18:17:41.679855   55339 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:17:41.780721   55339 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:17:41.780754   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetMachineName
	I0815 18:17:41.780974   55339 buildroot.go:166] provisioning hostname "test-preload-651099"
	I0815 18:17:41.780996   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetMachineName
	I0815 18:17:41.781263   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:41.783695   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.783998   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:41.784022   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.784190   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:41.784386   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:41.784545   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:41.784675   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:41.784895   55339 main.go:141] libmachine: Using SSH client type: native
	I0815 18:17:41.785101   55339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0815 18:17:41.785119   55339 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-651099 && echo "test-preload-651099" | sudo tee /etc/hostname
	I0815 18:17:41.903184   55339 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-651099
	
	I0815 18:17:41.903206   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:41.906111   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.906455   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:41.906478   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:41.906652   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:41.906838   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:41.907025   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:41.907242   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:41.907418   55339 main.go:141] libmachine: Using SSH client type: native
	I0815 18:17:41.907578   55339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0815 18:17:41.907593   55339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-651099' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-651099/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-651099' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:17:42.019518   55339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:17:42.019549   55339 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:17:42.019565   55339 buildroot.go:174] setting up certificates
	I0815 18:17:42.019575   55339 provision.go:84] configureAuth start
	I0815 18:17:42.019583   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetMachineName
	I0815 18:17:42.019867   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetIP
	I0815 18:17:42.022588   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.022876   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.022897   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.023026   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:42.025183   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.025511   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.025536   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.025638   55339 provision.go:143] copyHostCerts
	I0815 18:17:42.025683   55339 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:17:42.025700   55339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:17:42.025762   55339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:17:42.025847   55339 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:17:42.025855   55339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:17:42.025877   55339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:17:42.025928   55339 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:17:42.025935   55339 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:17:42.025954   55339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:17:42.026000   55339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.test-preload-651099 san=[127.0.0.1 192.168.39.43 localhost minikube test-preload-651099]
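provision.go generates a per-machine server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine name. As an illustration, a minimal Go sketch producing a certificate with those SANs; it is self-signed for brevity, whereas minikube signs the server cert with the CA key referenced above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-651099"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "test-preload-651099"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.43")},
	}
	// Self-signed: template doubles as parent. minikube instead signs with ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}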
	I0815 18:17:42.201543   55339 provision.go:177] copyRemoteCerts
	I0815 18:17:42.201593   55339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:17:42.201616   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:42.204192   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.204473   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.204525   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.204715   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:42.204931   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:42.205125   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:42.205286   55339 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa Username:docker}
	I0815 18:17:42.286701   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:17:42.310108   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:17:42.333868   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 18:17:42.357104   55339 provision.go:87] duration metric: took 337.517465ms to configureAuth
	I0815 18:17:42.357139   55339 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:17:42.357335   55339 config.go:182] Loaded profile config "test-preload-651099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0815 18:17:42.357417   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:42.359848   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.360108   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.360127   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.360320   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:42.360514   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:42.360656   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:42.360764   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:42.360878   55339 main.go:141] libmachine: Using SSH client type: native
	I0815 18:17:42.361023   55339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0815 18:17:42.361037   55339 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:17:42.664229   55339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:17:42.664259   55339 machine.go:96] duration metric: took 987.955459ms to provisionDockerMachine
	I0815 18:17:42.664273   55339 start.go:293] postStartSetup for "test-preload-651099" (driver="kvm2")
	I0815 18:17:42.664286   55339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:17:42.664309   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:42.664601   55339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:17:42.664631   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:42.667455   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.667737   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.667758   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.667874   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:42.668030   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:42.668199   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:42.668328   55339 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa Username:docker}
	I0815 18:17:42.752664   55339 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:17:42.757482   55339 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:17:42.757511   55339 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:17:42.757569   55339 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:17:42.757656   55339 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:17:42.757767   55339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:17:42.767659   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:17:42.798106   55339 start.go:296] duration metric: took 133.817838ms for postStartSetup
	I0815 18:17:42.798164   55339 fix.go:56] duration metric: took 18.058020095s for fixHost
	I0815 18:17:42.798192   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:42.801262   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.801716   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.801740   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.801920   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:42.802113   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:42.802287   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:42.802450   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:42.802610   55339 main.go:141] libmachine: Using SSH client type: native
	I0815 18:17:42.802805   55339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.43 22 <nil> <nil>}
	I0815 18:17:42.802818   55339 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:17:42.913340   55339 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723745862.873522875
	
	I0815 18:17:42.913370   55339 fix.go:216] guest clock: 1723745862.873522875
	I0815 18:17:42.913381   55339 fix.go:229] Guest: 2024-08-15 18:17:42.873522875 +0000 UTC Remote: 2024-08-15 18:17:42.798169727 +0000 UTC m=+32.179019453 (delta=75.353148ms)
	I0815 18:17:42.913426   55339 fix.go:200] guest clock delta is within tolerance: 75.353148ms
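fix.go compares the guest's `date +%s.%N` output with the host-side timestamp and accepts the machine when the drift is small. A tiny Go illustration using the two timestamps from the log; the 1s tolerance is an assumption for the example, not necessarily minikube's value:

package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1723745862, 873522875)                         // parsed from the SSH output above
	remote := time.Date(2024, 8, 15, 18, 17, 42, 798169727, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < time.Second)
}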
	I0815 18:17:42.913433   55339 start.go:83] releasing machines lock for "test-preload-651099", held for 18.173300567s
	I0815 18:17:42.913454   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:42.913719   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetIP
	I0815 18:17:42.916926   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.917314   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.917344   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.917534   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:42.918095   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:42.918270   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:17:42.918376   55339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:17:42.918422   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:42.918533   55339 ssh_runner.go:195] Run: cat /version.json
	I0815 18:17:42.918560   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:17:42.920974   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.921344   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.921396   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.921422   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.921577   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:42.921734   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:42.921785   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:42.921808   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:42.921912   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:42.921972   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:17:42.922048   55339 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa Username:docker}
	I0815 18:17:42.922166   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:17:42.922311   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:17:42.922474   55339 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa Username:docker}
	I0815 18:17:42.997888   55339 ssh_runner.go:195] Run: systemctl --version
	I0815 18:17:43.023132   55339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:17:43.166640   55339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:17:43.172959   55339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:17:43.173023   55339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:17:43.193470   55339 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:17:43.193491   55339 start.go:495] detecting cgroup driver to use...
	I0815 18:17:43.193543   55339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:17:43.209469   55339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:17:43.223358   55339 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:17:43.223423   55339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:17:43.237019   55339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:17:43.250751   55339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:17:43.374613   55339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:17:43.542611   55339 docker.go:233] disabling docker service ...
	I0815 18:17:43.542684   55339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:17:43.556979   55339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:17:43.569576   55339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:17:43.693210   55339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:17:43.815070   55339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:17:43.829264   55339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:17:43.849815   55339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0815 18:17:43.849881   55339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:17:43.862350   55339 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:17:43.862420   55339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:17:43.875000   55339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:17:43.885811   55339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:17:43.896619   55339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:17:43.907435   55339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:17:43.917715   55339 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:17:43.938373   55339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
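The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf so cri-o uses registry.k8s.io/pause:3.7 as the pause image and cgroupfs as the cgroup manager. An illustrative in-process equivalent of the first two edits, written in Go; it assumes the config file is readable and writable locally, unlike the real flow, which applies the edits over SSH via ssh_runner as logged:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Replace whole lines, mirroring the sed 's|^.*pause_image = .*$|...|' edits.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.7"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}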
	I0815 18:17:43.948846   55339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:17:43.958010   55339 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:17:43.958062   55339 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:17:43.971381   55339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:17:43.980750   55339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:17:44.098065   55339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:17:44.228526   55339 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:17:44.228601   55339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:17:44.233640   55339 start.go:563] Will wait 60s for crictl version
	I0815 18:17:44.233695   55339 ssh_runner.go:195] Run: which crictl
	I0815 18:17:44.237584   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:17:44.283795   55339 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:17:44.283886   55339 ssh_runner.go:195] Run: crio --version
	I0815 18:17:44.313528   55339 ssh_runner.go:195] Run: crio --version
	I0815 18:17:44.344063   55339 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0815 18:17:44.345308   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetIP
	I0815 18:17:44.347901   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:44.348209   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:17:44.348235   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:17:44.348458   55339 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:17:44.352433   55339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:17:44.365167   55339 kubeadm.go:883] updating cluster {Name:test-preload-651099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-651099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:17:44.365303   55339 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0815 18:17:44.365364   55339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:17:44.400810   55339 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0815 18:17:44.400880   55339 ssh_runner.go:195] Run: which lz4
	I0815 18:17:44.404903   55339 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:17:44.409122   55339 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:17:44.409145   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0815 18:17:45.966687   55339 crio.go:462] duration metric: took 1.561821109s to copy over tarball
	I0815 18:17:45.966755   55339 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:17:48.296269   55339 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.329490286s)
	I0815 18:17:48.296301   55339 crio.go:469] duration metric: took 2.329584888s to extract the tarball
	I0815 18:17:48.296311   55339 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:17:48.337094   55339 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:17:48.378600   55339 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0815 18:17:48.378624   55339 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:17:48.378697   55339 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:17:48.378713   55339 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 18:17:48.378728   55339 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0815 18:17:48.378753   55339 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0815 18:17:48.378754   55339 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:17:48.378705   55339 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0815 18:17:48.378778   55339 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 18:17:48.378850   55339 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 18:17:48.380036   55339 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:17:48.380096   55339 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0815 18:17:48.380116   55339 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:17:48.380127   55339 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 18:17:48.380121   55339 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 18:17:48.380124   55339 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0815 18:17:48.380190   55339 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0815 18:17:48.380196   55339 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0815 18:17:48.556370   55339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0815 18:17:48.587341   55339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0815 18:17:48.602728   55339 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0815 18:17:48.602767   55339 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0815 18:17:48.602801   55339 ssh_runner.go:195] Run: which crictl
	I0815 18:17:48.637321   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0815 18:17:48.637359   55339 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0815 18:17:48.637401   55339 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0815 18:17:48.637440   55339 ssh_runner.go:195] Run: which crictl
	I0815 18:17:48.672546   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0815 18:17:48.672757   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0815 18:17:48.717540   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0815 18:17:48.717695   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0815 18:17:48.723172   55339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 18:17:48.723657   55339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0815 18:17:48.737687   55339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0815 18:17:48.744154   55339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:17:48.772704   55339 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0815 18:17:48.772815   55339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0815 18:17:48.801821   55339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0815 18:17:48.823757   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0815 18:17:48.901979   55339 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0815 18:17:48.902019   55339 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0815 18:17:48.902027   55339 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0815 18:17:48.902047   55339 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0815 18:17:48.902051   55339 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0815 18:17:48.902075   55339 ssh_runner.go:195] Run: which crictl
	I0815 18:17:48.902098   55339 ssh_runner.go:195] Run: which crictl
	I0815 18:17:48.902076   55339 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 18:17:48.902161   55339 ssh_runner.go:195] Run: which crictl
	I0815 18:17:48.921363   55339 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0815 18:17:48.921394   55339 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:17:48.921433   55339 ssh_runner.go:195] Run: which crictl
	I0815 18:17:48.921435   55339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0815 18:17:48.921450   55339 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0815 18:17:48.921491   55339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0815 18:17:48.921508   55339 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0815 18:17:48.921538   55339 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0815 18:17:48.921585   55339 ssh_runner.go:195] Run: which crictl
	I0815 18:17:48.935306   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0815 18:17:48.935341   55339 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0815 18:17:48.935432   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0815 18:17:48.935443   55339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0815 18:17:48.935454   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 18:17:48.935509   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:17:49.228222   55339 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:17:51.854708   55339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4: (2.933183013s)
	I0815 18:17:51.854743   55339 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0815 18:17:51.854768   55339 ssh_runner.go:235] Completed: which crictl: (2.933151269s)
	I0815 18:17:51.854806   55339 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.919479088s)
	I0815 18:17:51.854830   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0815 18:17:51.854866   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0815 18:17:51.854895   55339 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4: (2.919414043s)
	I0815 18:17:51.854944   55339 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4: (2.919485486s)
	I0815 18:17:51.854968   55339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0815 18:17:51.854976   55339 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0815 18:17:51.854997   55339 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7: (2.919548909s)
	I0815 18:17:51.855038   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0815 18:17:51.855047   55339 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6: (2.919522384s)
	I0815 18:17:51.855003   55339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0815 18:17:51.855087   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:17:51.854950   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 18:17:51.855141   55339 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.626895932s)
	I0815 18:17:51.994543   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0815 18:17:51.994604   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0815 18:17:51.996406   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0815 18:17:52.417819   55339 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0815 18:17:52.417919   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0815 18:17:52.417978   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:17:52.418018   55339 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0815 18:17:52.418100   55339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0815 18:17:52.418135   55339 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0815 18:17:52.418103   55339 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0815 18:17:52.418213   55339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0815 18:17:52.477626   55339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0815 18:17:52.477654   55339 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0815 18:17:52.477676   55339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0815 18:17:52.477705   55339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0815 18:17:52.477762   55339 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0815 18:17:52.477856   55339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0815 18:17:52.486031   55339 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0815 18:17:52.486082   55339 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 18:17:52.486108   55339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0815 18:17:52.486164   55339 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0815 18:17:52.628832   55339 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0815 18:17:52.628876   55339 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0815 18:17:52.628922   55339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0815 18:17:52.628936   55339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0815 18:17:52.628972   55339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0815 18:17:52.629004   55339 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0815 18:17:53.372050   55339 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0815 18:17:53.372090   55339 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0815 18:17:53.372145   55339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0815 18:17:55.627034   55339 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.254869927s)
	I0815 18:17:55.627060   55339 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0815 18:17:55.627082   55339 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0815 18:17:55.627118   55339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0815 18:17:56.269055   55339 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0815 18:17:56.269090   55339 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0815 18:17:56.269139   55339 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0815 18:17:56.608966   55339 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0815 18:17:56.609018   55339 cache_images.go:123] Successfully loaded all cached images
	I0815 18:17:56.609027   55339 cache_images.go:92] duration metric: took 8.230389154s to LoadCachedImages
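
	The image-loading sequence above follows one pattern per image: stat the tarball under /var/lib/minikube/images, skip the scp when it already exists, then load it into the container runtime with "sudo podman load -i <tarball>" and mark it as transferred from cache. The following is a minimal, hypothetical sketch of that check-then-load step (not minikube's actual code); it runs podman locally via os/exec where minikube would go through its SSH runner, and the tarball path is just one of the names from the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// loadCachedImage loads an image tarball into podman's store, but only if
	// the tarball is already present; otherwise it reports that a copy from
	// the host-side cache would be needed first (that copy is elided here).
	func loadCachedImage(tarball string) error {
		if _, err := os.Stat(tarball); err != nil {
			// In the log above this is where "scp ... --> /var/lib/minikube/images/..." happens.
			return fmt.Errorf("tarball not present, copy needed: %w", err)
		}
		// Equivalent of: sudo podman load -i /var/lib/minikube/images/<name>
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load failed: %v: %s", err, out)
		}
		fmt.Printf("loaded %s\n", tarball)
		return nil
	}

	func main() {
		if err := loadCachedImage("/var/lib/minikube/images/kube-proxy_v1.24.4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
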
	I0815 18:17:56.609044   55339 kubeadm.go:934] updating node { 192.168.39.43 8443 v1.24.4 crio true true} ...
	I0815 18:17:56.609196   55339 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-651099 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.43
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-651099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:17:56.609309   55339 ssh_runner.go:195] Run: crio config
	I0815 18:17:56.658414   55339 cni.go:84] Creating CNI manager for ""
	I0815 18:17:56.658435   55339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:17:56.658448   55339 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:17:56.658466   55339 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.43 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-651099 NodeName:test-preload-651099 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.43"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.43 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:17:56.658628   55339 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.43
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-651099"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.43
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.43"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:17:56.658700   55339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0815 18:17:56.668912   55339 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:17:56.668971   55339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:17:56.678450   55339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0815 18:17:56.694099   55339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:17:56.709665   55339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
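
	The three scp lines above ship the generated kubelet drop-in, the kubelet unit, and kubeadm.yaml.new to the node. The YAML itself is rendered from Go templates in minikube's bootstrapper; the sketch below is a hypothetical, heavily trimmed rendering using text/template (not minikube's actual template), filled with the values that appear in the generated config shown earlier in the log.

	package main

	import (
		"os"
		"text/template"
	)

	// A hypothetical, trimmed-down ClusterConfiguration template; the real
	// template covers many more fields (extraArgs, certSANs, etcd, and so on).
	const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.APIServerPort}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	type params struct {
		KubernetesVersion    string
		ControlPlaneEndpoint string
		APIServerPort        int
		PodSubnet            string
		ServiceSubnet        string
	}

	func main() {
		t := template.Must(template.New("cc").Parse(clusterConfigTmpl))
		// Values mirror the generated config shown in the log above.
		_ = t.Execute(os.Stdout, params{
			KubernetesVersion:    "v1.24.4",
			ControlPlaneEndpoint: "control-plane.minikube.internal",
			APIServerPort:        8443,
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		})
	}
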
	I0815 18:17:56.725890   55339 ssh_runner.go:195] Run: grep 192.168.39.43	control-plane.minikube.internal$ /etc/hosts
	I0815 18:17:56.729624   55339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.43	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:17:56.741556   55339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:17:56.863631   55339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:17:56.881098   55339 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099 for IP: 192.168.39.43
	I0815 18:17:56.881118   55339 certs.go:194] generating shared ca certs ...
	I0815 18:17:56.881132   55339 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:17:56.881281   55339 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:17:56.881317   55339 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:17:56.881339   55339 certs.go:256] generating profile certs ...
	I0815 18:17:56.881416   55339 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/client.key
	I0815 18:17:56.881480   55339 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/apiserver.key.c0517a46
	I0815 18:17:56.881522   55339 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/proxy-client.key
	I0815 18:17:56.881637   55339 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:17:56.881664   55339 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:17:56.881671   55339 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:17:56.881701   55339 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:17:56.881731   55339 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:17:56.881757   55339 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:17:56.881806   55339 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:17:56.882672   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:17:56.914770   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:17:56.945182   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:17:56.976505   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:17:57.002975   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 18:17:57.039772   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:17:57.077781   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:17:57.101102   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:17:57.123969   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:17:57.146267   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:17:57.168851   55339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:17:57.191839   55339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:17:57.208248   55339 ssh_runner.go:195] Run: openssl version
	I0815 18:17:57.213945   55339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:17:57.224974   55339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:17:57.229414   55339 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:17:57.229470   55339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:17:57.235185   55339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:17:57.245883   55339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:17:57.256263   55339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:17:57.260538   55339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:17:57.260587   55339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:17:57.266153   55339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:17:57.277108   55339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:17:57.287977   55339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:17:57.292166   55339 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:17:57.292202   55339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:17:57.297723   55339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:17:57.308264   55339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:17:57.312584   55339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:17:57.318479   55339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:17:57.324225   55339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:17:57.330205   55339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:17:57.335925   55339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:17:57.341756   55339 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
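
	Each "openssl x509 ... -checkend 86400" invocation above asks whether a certificate expires within the next 24 hours (86400 seconds). The same check can be expressed in pure Go with crypto/x509; the path used below is just one of the certificates listed in the log, so treat it as an example input.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the certificate at path expires within the given
	// window, mirroring `openssl x509 -noout -checkend <seconds>`.
	func checkend(path string, window time.Duration) (expiringSoon bool, err error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// If "now + window" is past NotAfter, the cert expires inside the window.
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
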
	I0815 18:17:57.347347   55339 kubeadm.go:392] StartCluster: {Name:test-preload-651099 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-651099 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:17:57.347466   55339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:17:57.347512   55339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:17:57.384464   55339 cri.go:89] found id: ""
	I0815 18:17:57.384568   55339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:17:57.395108   55339 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:17:57.395133   55339 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:17:57.395184   55339 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:17:57.405061   55339 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:17:57.405475   55339 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-651099" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:17:57.405578   55339 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-651099" cluster setting kubeconfig missing "test-preload-651099" context setting]
	I0815 18:17:57.405815   55339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:17:57.406371   55339 kapi.go:59] client config for test-preload-651099: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 18:17:57.406945   55339 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:17:57.416471   55339 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.43
	I0815 18:17:57.416516   55339 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:17:57.416528   55339 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:17:57.416564   55339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:17:57.453455   55339 cri.go:89] found id: ""
	I0815 18:17:57.453517   55339 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:17:57.470410   55339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:17:57.480342   55339 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:17:57.480370   55339 kubeadm.go:157] found existing configuration files:
	
	I0815 18:17:57.480437   55339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:17:57.489711   55339 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:17:57.489792   55339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:17:57.499449   55339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:17:57.508574   55339 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:17:57.508616   55339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:17:57.517889   55339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:17:57.526709   55339 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:17:57.526743   55339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:17:57.535896   55339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:17:57.544791   55339 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:17:57.544843   55339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:17:57.554084   55339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:17:57.563718   55339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:17:57.649613   55339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:17:58.537776   55339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:17:58.782868   55339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:17:58.844669   55339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:17:58.914698   55339 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:17:58.914785   55339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:17:59.415636   55339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:17:59.915119   55339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:17:59.930173   55339 api_server.go:72] duration metric: took 1.015478956s to wait for apiserver process to appear ...
	I0815 18:17:59.930199   55339 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:17:59.930217   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:17:59.930698   55339 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I0815 18:18:00.430294   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:05.431686   55339 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 18:18:05.431738   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:10.432189   55339 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 18:18:10.432262   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:15.432716   55339 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 18:18:15.432756   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:20.433104   55339 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0815 18:18:20.433148   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:20.781972   55339 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": read tcp 192.168.39.1:42962->192.168.39.43:8443: read: connection reset by peer
	I0815 18:18:20.931249   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:20.931926   55339 api_server.go:269] stopped: https://192.168.39.43:8443/healthz: Get "https://192.168.39.43:8443/healthz": dial tcp 192.168.39.43:8443: connect: connection refused
	I0815 18:18:21.430384   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:24.085773   55339 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:18:24.085806   55339 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:18:24.085824   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:24.130739   55339 api_server.go:279] https://192.168.39.43:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:18:24.130764   55339 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:18:24.431206   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:24.436672   55339 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:18:24.436701   55339 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:18:24.930767   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:24.937432   55339 api_server.go:279] https://192.168.39.43:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:18:24.937471   55339 api_server.go:103] status: https://192.168.39.43:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:18:25.431050   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:25.437241   55339 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0815 18:18:25.445311   55339 api_server.go:141] control plane version: v1.24.4
	I0815 18:18:25.445332   55339 api_server.go:131] duration metric: took 25.515127009s to wait for apiserver health ...
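
	The healthz loop above polls https://192.168.39.43:8443/healthz roughly every 500ms, tolerating connection refusals while the apiserver container starts, 403s from anonymous access, and 500s while post-start hooks (rbac/bootstrap-roles, system priority classes) finish, until the endpoint finally returns 200 "ok". Below is a minimal sketch of such a loop, under the assumptions that the client is unauthenticated and skips TLS verification; it is an illustration, not minikube's api_server.go implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url every 500ms with a short per-request timeout
	// until it returns HTTP 200 or the overall deadline expires.
	func waitForHealthz(url string, overall time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			// Connection refused, 403 and 500 all fall through to a retry.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", overall)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.43:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz: ok")
	}
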
	I0815 18:18:25.445340   55339 cni.go:84] Creating CNI manager for ""
	I0815 18:18:25.445346   55339 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:18:25.447215   55339 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:18:25.448391   55339 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:18:25.460803   55339 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:18:25.479650   55339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:18:25.479710   55339 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0815 18:18:25.479724   55339 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0815 18:18:25.506078   55339 system_pods.go:59] 8 kube-system pods found
	I0815 18:18:25.506103   55339 system_pods.go:61] "coredns-6d4b75cb6d-d5g7k" [82a41408-dd1d-4963-a1cb-c6c98fdb10f6] Running
	I0815 18:18:25.506111   55339 system_pods.go:61] "coredns-6d4b75cb6d-v9w4v" [afa432d7-b799-483c-b5d7-076d7d969134] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:18:25.506116   55339 system_pods.go:61] "etcd-test-preload-651099" [e5a63b34-b0c0-4b6c-a21c-f9a57643070c] Running
	I0815 18:18:25.506120   55339 system_pods.go:61] "kube-apiserver-test-preload-651099" [5172debd-1904-4c97-a775-e609ccf35af7] Running
	I0815 18:18:25.506124   55339 system_pods.go:61] "kube-controller-manager-test-preload-651099" [71965642-3896-4d4d-86d9-7eef47c9b494] Running
	I0815 18:18:25.506128   55339 system_pods.go:61] "kube-proxy-l5vhv" [d67a1cc4-0c12-4767-a5d7-2fa970b89f60] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 18:18:25.506135   55339 system_pods.go:61] "kube-scheduler-test-preload-651099" [f3fc236b-d226-4ef3-8831-18843b067ea7] Running
	I0815 18:18:25.506143   55339 system_pods.go:61] "storage-provisioner" [4b518922-07af-44ce-9e4a-5d7d60c842d7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 18:18:25.506149   55339 system_pods.go:74] duration metric: took 26.482103ms to wait for pod list to return data ...
	I0815 18:18:25.506155   55339 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:18:25.514637   55339 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:18:25.514660   55339 node_conditions.go:123] node cpu capacity is 2
	I0815 18:18:25.514668   55339 node_conditions.go:105] duration metric: took 8.501791ms to run NodePressure ...
	I0815 18:18:25.514684   55339 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:18:25.758897   55339 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:18:25.765183   55339 retry.go:31] will retry after 343.982964ms: kubelet not initialised
	I0815 18:18:26.122009   55339 retry.go:31] will retry after 359.92272ms: kubelet not initialised
	I0815 18:18:26.486997   55339 retry.go:31] will retry after 508.76748ms: kubelet not initialised
	I0815 18:18:27.001404   55339 kubeadm.go:739] kubelet initialised
	I0815 18:18:27.001435   55339 kubeadm.go:740] duration metric: took 1.242515252s waiting for restarted kubelet to initialise ...
	I0815 18:18:27.001446   55339 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:18:27.005978   55339 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-v9w4v" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:27.011228   55339 pod_ready.go:98] node "test-preload-651099" hosting pod "coredns-6d4b75cb6d-v9w4v" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.011252   55339 pod_ready.go:82] duration metric: took 5.250436ms for pod "coredns-6d4b75cb6d-v9w4v" in "kube-system" namespace to be "Ready" ...
	E0815 18:18:27.011262   55339 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-651099" hosting pod "coredns-6d4b75cb6d-v9w4v" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.011271   55339 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:27.015413   55339 pod_ready.go:98] node "test-preload-651099" hosting pod "etcd-test-preload-651099" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.015437   55339 pod_ready.go:82] duration metric: took 4.154507ms for pod "etcd-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	E0815 18:18:27.015445   55339 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-651099" hosting pod "etcd-test-preload-651099" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.015451   55339 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:27.020376   55339 pod_ready.go:98] node "test-preload-651099" hosting pod "kube-apiserver-test-preload-651099" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.020392   55339 pod_ready.go:82] duration metric: took 4.935126ms for pod "kube-apiserver-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	E0815 18:18:27.020399   55339 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-651099" hosting pod "kube-apiserver-test-preload-651099" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.020406   55339 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:27.024505   55339 pod_ready.go:98] node "test-preload-651099" hosting pod "kube-controller-manager-test-preload-651099" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.024543   55339 pod_ready.go:82] duration metric: took 4.128171ms for pod "kube-controller-manager-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	E0815 18:18:27.024554   55339 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-651099" hosting pod "kube-controller-manager-test-preload-651099" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.024562   55339 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-l5vhv" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:27.400314   55339 pod_ready.go:98] node "test-preload-651099" hosting pod "kube-proxy-l5vhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.400346   55339 pod_ready.go:82] duration metric: took 375.772478ms for pod "kube-proxy-l5vhv" in "kube-system" namespace to be "Ready" ...
	E0815 18:18:27.400356   55339 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-651099" hosting pod "kube-proxy-l5vhv" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.400362   55339 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:27.800192   55339 pod_ready.go:98] node "test-preload-651099" hosting pod "kube-scheduler-test-preload-651099" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.800216   55339 pod_ready.go:82] duration metric: took 399.847373ms for pod "kube-scheduler-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	E0815 18:18:27.800225   55339 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-651099" hosting pod "kube-scheduler-test-preload-651099" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:27.800231   55339 pod_ready.go:39] duration metric: took 798.776199ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
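The pod_ready entries above record minikube polling each kube-system pod for its Ready condition, skipping pods while the node itself still reports Ready=False. Below is a minimal, illustrative sketch of a similar readiness poll using client-go (not minikube's own code); the kubeconfig path is a placeholder, while the pod name and the 4m timeout are taken from the log.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s wait in the log
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-test-preload-651099", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }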
	I0815 18:18:27.800252   55339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:18:27.813203   55339 ops.go:34] apiserver oom_adj: -16
	I0815 18:18:27.813220   55339 kubeadm.go:597] duration metric: took 30.418080391s to restartPrimaryControlPlane
	I0815 18:18:27.813227   55339 kubeadm.go:394] duration metric: took 30.465886748s to StartCluster
	I0815 18:18:27.813247   55339 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:18:27.813320   55339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:18:27.813924   55339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:18:27.814183   55339 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.43 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:18:27.814264   55339 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:18:27.814338   55339 addons.go:69] Setting storage-provisioner=true in profile "test-preload-651099"
	I0815 18:18:27.814349   55339 addons.go:69] Setting default-storageclass=true in profile "test-preload-651099"
	I0815 18:18:27.814369   55339 addons.go:234] Setting addon storage-provisioner=true in "test-preload-651099"
	W0815 18:18:27.814381   55339 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:18:27.814391   55339 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-651099"
	I0815 18:18:27.814418   55339 host.go:66] Checking if "test-preload-651099" exists ...
	I0815 18:18:27.814418   55339 config.go:182] Loaded profile config "test-preload-651099": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0815 18:18:27.814745   55339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:18:27.814801   55339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:18:27.814805   55339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:18:27.814840   55339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:18:27.815960   55339 out.go:177] * Verifying Kubernetes components...
	I0815 18:18:27.817268   55339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:18:27.829827   55339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44203
	I0815 18:18:27.829849   55339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35383
	I0815 18:18:27.830249   55339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:18:27.830291   55339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:18:27.830768   55339 main.go:141] libmachine: Using API Version  1
	I0815 18:18:27.830793   55339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:18:27.830886   55339 main.go:141] libmachine: Using API Version  1
	I0815 18:18:27.830902   55339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:18:27.831118   55339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:18:27.831232   55339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:18:27.831316   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetState
	I0815 18:18:27.831782   55339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:18:27.831822   55339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:18:27.833896   55339 kapi.go:59] client config for test-preload-651099: &rest.Config{Host:"https://192.168.39.43:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/test-preload-651099/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(n
il), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
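The kapi.go line above dumps the rest.Config minikube assembles for this profile: the API server at https://192.168.39.43:8443 with TLS client auth from the profile's client.crt/client.key and the cluster ca.crt. As a rough, hand-written equivalent (not minikube's code), the same kind of config can be built directly; the certificate paths below are placeholders standing in for the profile files shown in the dump.

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// Placeholder cert/key/CA paths; the dump above shows where the real
    	// profile files live under the minikube home directory.
    	cfg := &rest.Config{
    		Host: "https://192.168.39.43:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/path/to/client.crt",
    			KeyFile:  "/path/to/client.key",
    			CAFile:   "/path/to/ca.crt",
    		},
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("clientset ready:", client != nil)
    }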
	I0815 18:18:27.834214   55339 addons.go:234] Setting addon default-storageclass=true in "test-preload-651099"
	W0815 18:18:27.834234   55339 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:18:27.834270   55339 host.go:66] Checking if "test-preload-651099" exists ...
	I0815 18:18:27.834652   55339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:18:27.834696   55339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:18:27.846931   55339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
	I0815 18:18:27.847502   55339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:18:27.848012   55339 main.go:141] libmachine: Using API Version  1
	I0815 18:18:27.848041   55339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:18:27.848407   55339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:18:27.848579   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetState
	I0815 18:18:27.849245   55339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40361
	I0815 18:18:27.849644   55339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:18:27.850043   55339 main.go:141] libmachine: Using API Version  1
	I0815 18:18:27.850066   55339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:18:27.850156   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:18:27.850370   55339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:18:27.850796   55339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:18:27.850830   55339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:18:27.852079   55339 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:18:27.853456   55339 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:18:27.853472   55339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:18:27.853485   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:18:27.856504   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:18:27.856942   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:18:27.856959   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:18:27.857209   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:18:27.857385   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:18:27.857551   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:18:27.857687   55339 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa Username:docker}
	I0815 18:18:27.865773   55339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36203
	I0815 18:18:27.866104   55339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:18:27.866516   55339 main.go:141] libmachine: Using API Version  1
	I0815 18:18:27.866540   55339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:18:27.866822   55339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:18:27.866992   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetState
	I0815 18:18:27.868440   55339 main.go:141] libmachine: (test-preload-651099) Calling .DriverName
	I0815 18:18:27.868678   55339 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:18:27.868692   55339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:18:27.868710   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHHostname
	I0815 18:18:27.871175   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:18:27.871543   55339 main.go:141] libmachine: (test-preload-651099) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:16:b9", ip: ""} in network mk-test-preload-651099: {Iface:virbr1 ExpiryTime:2024-08-15 19:17:35 +0000 UTC Type:0 Mac:52:54:00:24:16:b9 Iaid: IPaddr:192.168.39.43 Prefix:24 Hostname:test-preload-651099 Clientid:01:52:54:00:24:16:b9}
	I0815 18:18:27.871568   55339 main.go:141] libmachine: (test-preload-651099) DBG | domain test-preload-651099 has defined IP address 192.168.39.43 and MAC address 52:54:00:24:16:b9 in network mk-test-preload-651099
	I0815 18:18:27.871709   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHPort
	I0815 18:18:27.871877   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHKeyPath
	I0815 18:18:27.872012   55339 main.go:141] libmachine: (test-preload-651099) Calling .GetSSHUsername
	I0815 18:18:27.872123   55339 sshutil.go:53] new ssh client: &{IP:192.168.39.43 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/test-preload-651099/id_rsa Username:docker}
	I0815 18:18:27.996292   55339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:18:28.018082   55339 node_ready.go:35] waiting up to 6m0s for node "test-preload-651099" to be "Ready" ...
	I0815 18:18:28.075914   55339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:18:28.185342   55339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:18:29.102600   55339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.026643733s)
	I0815 18:18:29.102653   55339 main.go:141] libmachine: Making call to close driver server
	I0815 18:18:29.102667   55339 main.go:141] libmachine: (test-preload-651099) Calling .Close
	I0815 18:18:29.102939   55339 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:18:29.102981   55339 main.go:141] libmachine: (test-preload-651099) DBG | Closing plugin on server side
	I0815 18:18:29.102999   55339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:18:29.103009   55339 main.go:141] libmachine: Making call to close driver server
	I0815 18:18:29.103025   55339 main.go:141] libmachine: (test-preload-651099) Calling .Close
	I0815 18:18:29.103279   55339 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:18:29.103305   55339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:18:29.103307   55339 main.go:141] libmachine: (test-preload-651099) DBG | Closing plugin on server side
	I0815 18:18:29.109233   55339 main.go:141] libmachine: Making call to close driver server
	I0815 18:18:29.109254   55339 main.go:141] libmachine: (test-preload-651099) Calling .Close
	I0815 18:18:29.109478   55339 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:18:29.109494   55339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:18:29.128915   55339 main.go:141] libmachine: Making call to close driver server
	I0815 18:18:29.128938   55339 main.go:141] libmachine: (test-preload-651099) Calling .Close
	I0815 18:18:29.129148   55339 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:18:29.129170   55339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:18:29.129183   55339 main.go:141] libmachine: Making call to close driver server
	I0815 18:18:29.129190   55339 main.go:141] libmachine: (test-preload-651099) Calling .Close
	I0815 18:18:29.129382   55339 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:18:29.129399   55339 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:18:29.131322   55339 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0815 18:18:29.132532   55339 addons.go:510] duration metric: took 1.318272141s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0815 18:18:30.022302   55339 node_ready.go:53] node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:32.023465   55339 node_ready.go:53] node "test-preload-651099" has status "Ready":"False"
	I0815 18:18:34.527974   55339 node_ready.go:49] node "test-preload-651099" has status "Ready":"True"
	I0815 18:18:34.527998   55339 node_ready.go:38] duration metric: took 6.509887897s for node "test-preload-651099" to be "Ready" ...
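node_ready above polls the node object until its Ready condition turns True, which takes about 6.5s after the restart. As an alternative to polling, a single-node watch can surface the same transition; this is only an illustrative sketch, with a placeholder kubeconfig path and minimal error handling, while the node name comes from the log.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Watch only the node named in the log instead of polling it.
    	w, err := client.CoreV1().Nodes().Watch(context.TODO(), metav1.ListOptions{
    		FieldSelector: "metadata.name=test-preload-651099",
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		node, ok := ev.Object.(*corev1.Node)
    		if !ok {
    			continue
    		}
    		for _, c := range node.Status.Conditions {
    			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Println("node is Ready")
    				return
    			}
    		}
    	}
    }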
	I0815 18:18:34.528018   55339 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:18:34.534962   55339 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-v9w4v" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:34.540565   55339 pod_ready.go:93] pod "coredns-6d4b75cb6d-v9w4v" in "kube-system" namespace has status "Ready":"True"
	I0815 18:18:34.540582   55339 pod_ready.go:82] duration metric: took 5.59782ms for pod "coredns-6d4b75cb6d-v9w4v" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:34.540591   55339 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:34.546405   55339 pod_ready.go:93] pod "etcd-test-preload-651099" in "kube-system" namespace has status "Ready":"True"
	I0815 18:18:34.546423   55339 pod_ready.go:82] duration metric: took 5.823456ms for pod "etcd-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:34.546433   55339 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:34.551134   55339 pod_ready.go:93] pod "kube-apiserver-test-preload-651099" in "kube-system" namespace has status "Ready":"True"
	I0815 18:18:34.551151   55339 pod_ready.go:82] duration metric: took 4.711533ms for pod "kube-apiserver-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:34.551160   55339 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:36.557622   55339 pod_ready.go:103] pod "kube-controller-manager-test-preload-651099" in "kube-system" namespace has status "Ready":"False"
	I0815 18:18:37.057236   55339 pod_ready.go:93] pod "kube-controller-manager-test-preload-651099" in "kube-system" namespace has status "Ready":"True"
	I0815 18:18:37.057258   55339 pod_ready.go:82] duration metric: took 2.506092334s for pod "kube-controller-manager-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:37.057268   55339 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-l5vhv" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:37.062094   55339 pod_ready.go:93] pod "kube-proxy-l5vhv" in "kube-system" namespace has status "Ready":"True"
	I0815 18:18:37.062112   55339 pod_ready.go:82] duration metric: took 4.839458ms for pod "kube-proxy-l5vhv" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:37.062121   55339 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:37.322663   55339 pod_ready.go:93] pod "kube-scheduler-test-preload-651099" in "kube-system" namespace has status "Ready":"True"
	I0815 18:18:37.322686   55339 pod_ready.go:82] duration metric: took 260.558422ms for pod "kube-scheduler-test-preload-651099" in "kube-system" namespace to be "Ready" ...
	I0815 18:18:37.322700   55339 pod_ready.go:39] duration metric: took 2.794669844s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:18:37.322716   55339 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:18:37.322771   55339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:18:37.338459   55339 api_server.go:72] duration metric: took 9.524244436s to wait for apiserver process to appear ...
	I0815 18:18:37.338482   55339 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:18:37.338509   55339 api_server.go:253] Checking apiserver healthz at https://192.168.39.43:8443/healthz ...
	I0815 18:18:37.343315   55339 api_server.go:279] https://192.168.39.43:8443/healthz returned 200:
	ok
	I0815 18:18:37.344461   55339 api_server.go:141] control plane version: v1.24.4
	I0815 18:18:37.344480   55339 api_server.go:131] duration metric: took 5.991205ms to wait for apiserver health ...
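api_server.go above verifies apiserver health by requesting /healthz on https://192.168.39.43:8443 and expecting HTTP 200 with the body "ok". A bare-bones version of that probe is sketched below; it skips TLS verification purely to keep the example short, whereas a proper check would trust the cluster CA shown earlier in the log.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// InsecureSkipVerify is for illustration only; trust the cluster CA instead.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.43:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expected: 200 ok
    }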
	I0815 18:18:37.344504   55339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:18:37.525122   55339 system_pods.go:59] 7 kube-system pods found
	I0815 18:18:37.525152   55339 system_pods.go:61] "coredns-6d4b75cb6d-v9w4v" [afa432d7-b799-483c-b5d7-076d7d969134] Running
	I0815 18:18:37.525157   55339 system_pods.go:61] "etcd-test-preload-651099" [e5a63b34-b0c0-4b6c-a21c-f9a57643070c] Running
	I0815 18:18:37.525161   55339 system_pods.go:61] "kube-apiserver-test-preload-651099" [5172debd-1904-4c97-a775-e609ccf35af7] Running
	I0815 18:18:37.525165   55339 system_pods.go:61] "kube-controller-manager-test-preload-651099" [71965642-3896-4d4d-86d9-7eef47c9b494] Running
	I0815 18:18:37.525168   55339 system_pods.go:61] "kube-proxy-l5vhv" [d67a1cc4-0c12-4767-a5d7-2fa970b89f60] Running
	I0815 18:18:37.525172   55339 system_pods.go:61] "kube-scheduler-test-preload-651099" [f3fc236b-d226-4ef3-8831-18843b067ea7] Running
	I0815 18:18:37.525175   55339 system_pods.go:61] "storage-provisioner" [4b518922-07af-44ce-9e4a-5d7d60c842d7] Running
	I0815 18:18:37.525182   55339 system_pods.go:74] duration metric: took 180.666839ms to wait for pod list to return data ...
	I0815 18:18:37.525188   55339 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:18:37.722398   55339 default_sa.go:45] found service account: "default"
	I0815 18:18:37.722425   55339 default_sa.go:55] duration metric: took 197.232029ms for default service account to be created ...
	I0815 18:18:37.722433   55339 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:18:37.925605   55339 system_pods.go:86] 7 kube-system pods found
	I0815 18:18:37.925632   55339 system_pods.go:89] "coredns-6d4b75cb6d-v9w4v" [afa432d7-b799-483c-b5d7-076d7d969134] Running
	I0815 18:18:37.925638   55339 system_pods.go:89] "etcd-test-preload-651099" [e5a63b34-b0c0-4b6c-a21c-f9a57643070c] Running
	I0815 18:18:37.925642   55339 system_pods.go:89] "kube-apiserver-test-preload-651099" [5172debd-1904-4c97-a775-e609ccf35af7] Running
	I0815 18:18:37.925646   55339 system_pods.go:89] "kube-controller-manager-test-preload-651099" [71965642-3896-4d4d-86d9-7eef47c9b494] Running
	I0815 18:18:37.925649   55339 system_pods.go:89] "kube-proxy-l5vhv" [d67a1cc4-0c12-4767-a5d7-2fa970b89f60] Running
	I0815 18:18:37.925652   55339 system_pods.go:89] "kube-scheduler-test-preload-651099" [f3fc236b-d226-4ef3-8831-18843b067ea7] Running
	I0815 18:18:37.925655   55339 system_pods.go:89] "storage-provisioner" [4b518922-07af-44ce-9e4a-5d7d60c842d7] Running
	I0815 18:18:37.925662   55339 system_pods.go:126] duration metric: took 203.224397ms to wait for k8s-apps to be running ...
	I0815 18:18:37.925669   55339 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:18:37.925710   55339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:18:37.940403   55339 system_svc.go:56] duration metric: took 14.729797ms WaitForService to wait for kubelet
	I0815 18:18:37.940425   55339 kubeadm.go:582] duration metric: took 10.12621434s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:18:37.940443   55339 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:18:38.123378   55339 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:18:38.123403   55339 node_conditions.go:123] node cpu capacity is 2
	I0815 18:18:38.123415   55339 node_conditions.go:105] duration metric: took 182.967688ms to run NodePressure ...
	I0815 18:18:38.123428   55339 start.go:241] waiting for startup goroutines ...
	I0815 18:18:38.123436   55339 start.go:246] waiting for cluster config update ...
	I0815 18:18:38.123448   55339 start.go:255] writing updated cluster config ...
	I0815 18:18:38.123689   55339 ssh_runner.go:195] Run: rm -f paused
	I0815 18:18:38.170323   55339 start.go:600] kubectl: 1.31.0, cluster: 1.24.4 (minor skew: 7)
	I0815 18:18:38.172118   55339 out.go:201] 
	W0815 18:18:38.173426   55339 out.go:270] ! /usr/local/bin/kubectl is version 1.31.0, which may have incompatibilities with Kubernetes 1.24.4.
	I0815 18:18:38.174682   55339 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0815 18:18:38.176074   55339 out.go:177] * Done! kubectl is now configured to use "test-preload-651099" cluster and "default" namespace by default
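The warning above is a client/server skew notice: the host's kubectl is 1.31.0 while the cluster runs Kubernetes 1.24.4, hence the suggestion to use 'minikube kubectl -- get pods -A'. If checking the skew programmatically is ever useful, the discovery client reports the server's version; this is only a sketch and the kubeconfig path is a placeholder.

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	v, err := client.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("server version:", v.GitVersion) // v1.24.4 for this run
    }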
	
	
	==> CRI-O <==
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.083631506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8443dd46-63f9-4da9-a32c-5c08b023ba39 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.084765461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c421611-e1b1-464b-aa27-95273c3791bf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.085259636Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745919085231022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c421611-e1b1-464b-aa27-95273c3791bf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.086091687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04f8c378-9352-4332-9004-c08a36e5cddc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.086141714Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04f8c378-9352-4332-9004-c08a36e5cddc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.086338844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f34c8ab52ca9d8d5313bac416fa38dc2da39ca9eb6f3e64155b6c09f71b2f2a,PodSandboxId:cdb84e288e24bac90c404aa3f5b59d5f0c11a98765dadf5c82f87d1897feeb8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1723745913115022399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-v9w4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa432d7-b799-483c-b5d7-076d7d969134,},Annotations:map[string]string{io.kubernetes.container.hash: 4d29e3a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca761380053969267b6ee0fa63723515955ca010483e59160ce73a63b4799ff,PodSandboxId:e9beec955d190369dd4c34a1461e189c9957532ae807f25aac4424e4c8681913,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1723745906029659722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5vhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d67a1cc4-0c12-4767-a5d7-2fa970b89f60,},Annotations:map[string]string{io.kubernetes.container.hash: 8457aeaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48a6cc5bab43923ee5560792dc514c07a22eb4771a786b0a5b0aa0445ee9dc0,PodSandboxId:b9a65314d6ef44bb4e60b00eb0b4a2332166d559a3fdbeda25a0fa5a1910dc0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745905955018415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b
518922-07af-44ce-9e4a-5d7d60c842d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8e811c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00242dbc3b137b93961487bdb7b3ac62c46a2b62866c521b6f83ff9350178ecf,PodSandboxId:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1723745905100741366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 563193e5424b59a6c4efc949e014a0fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f088bb9478950b3caeaf35937cbf61ef3c50c36861e9f7d94a7412dbbb0761,PodSandboxId:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1723745901066161203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: da8301c937a73e4d45f71cc2344f3f86,},Annotations:map[string]string{io.kubernetes.container.hash: c8267ae7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b717e006f7f0ddc7443030ab65afa79563b37aa29913edc7d69a5bc1e399a78,PodSandboxId:c7a947f99d62c9154b8527308ea32a2b278c89d4e2b04b9feb6ba9f8ee132d1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1723745899276175373,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f258d971bb4c6f26729132012b26f5,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c6f0d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc02735a720280b0b9ae9aa1b2c60b210e873a40ff4322c3b1c0d4e280be368,PodSandboxId:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1723745879581028361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563193e5424b59a6c4e
fc949e014a0fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b3a204405193fd1a0f41444413e6dd004360117356a51441bf6d9cabc7d7cf,PodSandboxId:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1723745879546687363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8301c937a73e4d45f71cc2344f3f8
6,},Annotations:map[string]string{io.kubernetes.container.hash: c8267ae7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7060f0038f427a69457446000a40dedd72e00c72385219e4c8785bd9720974,PodSandboxId:59da5a581734044875a5b28d44a1a01cb561abd44ca714e3249c76ff25d42b94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1723745879528013345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b980ab577efbb1c2fd6d8fa150fb8d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04f8c378-9352-4332-9004-c08a36e5cddc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.126701146Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d6b0e715-b7d0-4851-a9b9-7c9782b8b4d2 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.126768564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d6b0e715-b7d0-4851-a9b9-7c9782b8b4d2 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.127606733Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c97af919-6bef-4e8d-9f22-c5a79d8ec020 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.128086236Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745919128059433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c97af919-6bef-4e8d-9f22-c5a79d8ec020 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.128806149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ad483b5c-6e75-4ae3-bc6a-07e8552f0889 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.128931899Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ad483b5c-6e75-4ae3-bc6a-07e8552f0889 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.129129627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f34c8ab52ca9d8d5313bac416fa38dc2da39ca9eb6f3e64155b6c09f71b2f2a,PodSandboxId:cdb84e288e24bac90c404aa3f5b59d5f0c11a98765dadf5c82f87d1897feeb8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1723745913115022399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-v9w4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa432d7-b799-483c-b5d7-076d7d969134,},Annotations:map[string]string{io.kubernetes.container.hash: 4d29e3a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca761380053969267b6ee0fa63723515955ca010483e59160ce73a63b4799ff,PodSandboxId:e9beec955d190369dd4c34a1461e189c9957532ae807f25aac4424e4c8681913,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1723745906029659722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5vhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d67a1cc4-0c12-4767-a5d7-2fa970b89f60,},Annotations:map[string]string{io.kubernetes.container.hash: 8457aeaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48a6cc5bab43923ee5560792dc514c07a22eb4771a786b0a5b0aa0445ee9dc0,PodSandboxId:b9a65314d6ef44bb4e60b00eb0b4a2332166d559a3fdbeda25a0fa5a1910dc0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745905955018415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b
518922-07af-44ce-9e4a-5d7d60c842d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8e811c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00242dbc3b137b93961487bdb7b3ac62c46a2b62866c521b6f83ff9350178ecf,PodSandboxId:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1723745905100741366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 563193e5424b59a6c4efc949e014a0fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f088bb9478950b3caeaf35937cbf61ef3c50c36861e9f7d94a7412dbbb0761,PodSandboxId:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1723745901066161203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: da8301c937a73e4d45f71cc2344f3f86,},Annotations:map[string]string{io.kubernetes.container.hash: c8267ae7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b717e006f7f0ddc7443030ab65afa79563b37aa29913edc7d69a5bc1e399a78,PodSandboxId:c7a947f99d62c9154b8527308ea32a2b278c89d4e2b04b9feb6ba9f8ee132d1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1723745899276175373,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f258d971bb4c6f26729132012b26f5,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c6f0d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc02735a720280b0b9ae9aa1b2c60b210e873a40ff4322c3b1c0d4e280be368,PodSandboxId:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1723745879581028361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563193e5424b59a6c4e
fc949e014a0fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b3a204405193fd1a0f41444413e6dd004360117356a51441bf6d9cabc7d7cf,PodSandboxId:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1723745879546687363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8301c937a73e4d45f71cc2344f3f8
6,},Annotations:map[string]string{io.kubernetes.container.hash: c8267ae7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7060f0038f427a69457446000a40dedd72e00c72385219e4c8785bd9720974,PodSandboxId:59da5a581734044875a5b28d44a1a01cb561abd44ca714e3249c76ff25d42b94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1723745879528013345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b980ab577efbb1c2fd6d8fa150fb8d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ad483b5c-6e75-4ae3-bc6a-07e8552f0889 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.160332377Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=9159695c-2cc5-4fa2-8dc0-4c2be2ea0be5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.160547957Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:cdb84e288e24bac90c404aa3f5b59d5f0c11a98765dadf5c82f87d1897feeb8f,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-v9w4v,Uid:afa432d7-b799-483c-b5d7-076d7d969134,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723745912889905205,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-v9w4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa432d7-b799-483c-b5d7-076d7d969134,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T18:18:24.898933583Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e9beec955d190369dd4c34a1461e189c9957532ae807f25aac4424e4c8681913,Metadata:&PodSandboxMetadata{Name:kube-proxy-l5vhv,Uid:d67a1cc4-0c12-4767-a5d7-2fa970b89f60,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1723745905809228025,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-l5vhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d67a1cc4-0c12-4767-a5d7-2fa970b89f60,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T18:18:24.898938411Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b9a65314d6ef44bb4e60b00eb0b4a2332166d559a3fdbeda25a0fa5a1910dc0a,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:4b518922-07af-44ce-9e4a-5d7d60c842d7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723745905804577170,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b518922-07af-44ce-9e4a-5d7d
60c842d7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-08-15T18:18:24.898940049Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c7a947f99d62c9154b8527308ea32a2b278c89d4e2b04b9feb6ba9f8ee132d1f,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-651099,Uid:a3f258d971bb4c6f2
6729132012b26f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723745899183638635,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f258d971bb4c6f26729132012b26f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.43:2379,kubernetes.io/config.hash: a3f258d971bb4c6f26729132012b26f5,kubernetes.io/config.seen: 2024-08-15T18:18:18.869804104Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-651099,Uid:563193e5424b59a6c4efc949e014a0fe,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723745879417556146,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-
controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563193e5424b59a6c4efc949e014a0fe,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 563193e5424b59a6c4efc949e014a0fe,kubernetes.io/config.seen: 2024-08-15T18:17:58.879925177Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-651099,Uid:da8301c937a73e4d45f71cc2344f3f86,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723745879408682144,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8301c937a73e4d45f71cc2344f3f86,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.43:8443,kubernetes.io/con
fig.hash: da8301c937a73e4d45f71cc2344f3f86,kubernetes.io/config.seen: 2024-08-15T18:17:58.879898434Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:59da5a581734044875a5b28d44a1a01cb561abd44ca714e3249c76ff25d42b94,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-651099,Uid:30b980ab577efbb1c2fd6d8fa150fb8d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723745879398167368,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b980ab577efbb1c2fd6d8fa150fb8d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 30b980ab577efbb1c2fd6d8fa150fb8d,kubernetes.io/config.seen: 2024-08-15T18:17:58.879930958Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9159695c-2cc5-4fa2-8dc0-4c2be2ea0be5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.161183245Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f75a6528-5e69-4937-b1dc-8bdc962bfc57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.161235346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f75a6528-5e69-4937-b1dc-8bdc962bfc57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.161418825Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f34c8ab52ca9d8d5313bac416fa38dc2da39ca9eb6f3e64155b6c09f71b2f2a,PodSandboxId:cdb84e288e24bac90c404aa3f5b59d5f0c11a98765dadf5c82f87d1897feeb8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1723745913115022399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-v9w4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa432d7-b799-483c-b5d7-076d7d969134,},Annotations:map[string]string{io.kubernetes.container.hash: 4d29e3a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca761380053969267b6ee0fa63723515955ca010483e59160ce73a63b4799ff,PodSandboxId:e9beec955d190369dd4c34a1461e189c9957532ae807f25aac4424e4c8681913,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1723745906029659722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5vhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d67a1cc4-0c12-4767-a5d7-2fa970b89f60,},Annotations:map[string]string{io.kubernetes.container.hash: 8457aeaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48a6cc5bab43923ee5560792dc514c07a22eb4771a786b0a5b0aa0445ee9dc0,PodSandboxId:b9a65314d6ef44bb4e60b00eb0b4a2332166d559a3fdbeda25a0fa5a1910dc0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745905955018415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b
518922-07af-44ce-9e4a-5d7d60c842d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8e811c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00242dbc3b137b93961487bdb7b3ac62c46a2b62866c521b6f83ff9350178ecf,PodSandboxId:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1723745905100741366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 563193e5424b59a6c4efc949e014a0fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f088bb9478950b3caeaf35937cbf61ef3c50c36861e9f7d94a7412dbbb0761,PodSandboxId:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1723745901066161203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: da8301c937a73e4d45f71cc2344f3f86,},Annotations:map[string]string{io.kubernetes.container.hash: c8267ae7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b717e006f7f0ddc7443030ab65afa79563b37aa29913edc7d69a5bc1e399a78,PodSandboxId:c7a947f99d62c9154b8527308ea32a2b278c89d4e2b04b9feb6ba9f8ee132d1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1723745899276175373,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f258d971bb4c6f26729132012b26f5,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c6f0d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc02735a720280b0b9ae9aa1b2c60b210e873a40ff4322c3b1c0d4e280be368,PodSandboxId:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1723745879581028361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563193e5424b59a6c4e
fc949e014a0fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b3a204405193fd1a0f41444413e6dd004360117356a51441bf6d9cabc7d7cf,PodSandboxId:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1723745879546687363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8301c937a73e4d45f71cc2344f3f8
6,},Annotations:map[string]string{io.kubernetes.container.hash: c8267ae7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7060f0038f427a69457446000a40dedd72e00c72385219e4c8785bd9720974,PodSandboxId:59da5a581734044875a5b28d44a1a01cb561abd44ca714e3249c76ff25d42b94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1723745879528013345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b980ab577efbb1c2fd6d8fa150fb8d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f75a6528-5e69-4937-b1dc-8bdc962bfc57 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.164142710Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5b72de4c-5194-4f97-b587-30f450a39ee0 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.164215958Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5b72de4c-5194-4f97-b587-30f450a39ee0 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.165641148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fe052a21-2630-411c-803a-20e74a6582d3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.166127552Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723745919166107087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fe052a21-2630-411c-803a-20e74a6582d3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.166676848Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0aadeeaf-1a61-46eb-ac16-20e6a634b9c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.166743706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0aadeeaf-1a61-46eb-ac16-20e6a634b9c0 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:18:39 test-preload-651099 crio[688]: time="2024-08-15 18:18:39.166966856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2f34c8ab52ca9d8d5313bac416fa38dc2da39ca9eb6f3e64155b6c09f71b2f2a,PodSandboxId:cdb84e288e24bac90c404aa3f5b59d5f0c11a98765dadf5c82f87d1897feeb8f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1723745913115022399,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-v9w4v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afa432d7-b799-483c-b5d7-076d7d969134,},Annotations:map[string]string{io.kubernetes.container.hash: 4d29e3a6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ca761380053969267b6ee0fa63723515955ca010483e59160ce73a63b4799ff,PodSandboxId:e9beec955d190369dd4c34a1461e189c9957532ae807f25aac4424e4c8681913,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1723745906029659722,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-l5vhv,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: d67a1cc4-0c12-4767-a5d7-2fa970b89f60,},Annotations:map[string]string{io.kubernetes.container.hash: 8457aeaf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e48a6cc5bab43923ee5560792dc514c07a22eb4771a786b0a5b0aa0445ee9dc0,PodSandboxId:b9a65314d6ef44bb4e60b00eb0b4a2332166d559a3fdbeda25a0fa5a1910dc0a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723745905955018415,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b
518922-07af-44ce-9e4a-5d7d60c842d7,},Annotations:map[string]string{io.kubernetes.container.hash: f8e811c6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00242dbc3b137b93961487bdb7b3ac62c46a2b62866c521b6f83ff9350178ecf,PodSandboxId:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1723745905100741366,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 563193e5424b59a6c4efc949e014a0fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:45f088bb9478950b3caeaf35937cbf61ef3c50c36861e9f7d94a7412dbbb0761,PodSandboxId:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1723745901066161203,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: da8301c937a73e4d45f71cc2344f3f86,},Annotations:map[string]string{io.kubernetes.container.hash: c8267ae7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b717e006f7f0ddc7443030ab65afa79563b37aa29913edc7d69a5bc1e399a78,PodSandboxId:c7a947f99d62c9154b8527308ea32a2b278c89d4e2b04b9feb6ba9f8ee132d1f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1723745899276175373,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3f258d971bb4c6f26729132012b26f5,}
,Annotations:map[string]string{io.kubernetes.container.hash: 8c6f0d83,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0dc02735a720280b0b9ae9aa1b2c60b210e873a40ff4322c3b1c0d4e280be368,PodSandboxId:fa063aa3ccb9251c2f41c37e5f665fb5ba763de7fa8506b6c70fed33a29e71e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_EXITED,CreatedAt:1723745879581028361,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 563193e5424b59a6c4e
fc949e014a0fe,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b3a204405193fd1a0f41444413e6dd004360117356a51441bf6d9cabc7d7cf,PodSandboxId:1688a4c4d2b516739145c44d9b46a7be81a1a10003f901d881f4fc1d1ae5e0a7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_EXITED,CreatedAt:1723745879546687363,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: da8301c937a73e4d45f71cc2344f3f8
6,},Annotations:map[string]string{io.kubernetes.container.hash: c8267ae7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1f7060f0038f427a69457446000a40dedd72e00c72385219e4c8785bd9720974,PodSandboxId:59da5a581734044875a5b28d44a1a01cb561abd44ca714e3249c76ff25d42b94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1723745879528013345,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-651099,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30b980ab577efbb1c2fd6d8fa150fb8d,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0aadeeaf-1a61-46eb-ac16-20e6a634b9c0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2f34c8ab52ca9       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   6 seconds ago       Running             coredns                   1                   cdb84e288e24b       coredns-6d4b75cb6d-v9w4v
	8ca7613800539       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   13 seconds ago      Running             kube-proxy                1                   e9beec955d190       kube-proxy-l5vhv
	e48a6cc5bab43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Running             storage-provisioner       1                   b9a65314d6ef4       storage-provisioner
	00242dbc3b137       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   14 seconds ago      Running             kube-controller-manager   2                   fa063aa3ccb92       kube-controller-manager-test-preload-651099
	45f088bb94789       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   18 seconds ago      Running             kube-apiserver            2                   1688a4c4d2b51       kube-apiserver-test-preload-651099
	4b717e006f7f0       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   c7a947f99d62c       etcd-test-preload-651099
	0dc02735a7202       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   39 seconds ago      Exited              kube-controller-manager   1                   fa063aa3ccb92       kube-controller-manager-test-preload-651099
	b1b3a20440519       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   39 seconds ago      Exited              kube-apiserver            1                   1688a4c4d2b51       kube-apiserver-test-preload-651099
	1f7060f0038f4       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   39 seconds ago      Running             kube-scheduler            1                   59da5a5817340       kube-scheduler-test-preload-651099
	
	
	==> coredns [2f34c8ab52ca9d8d5313bac416fa38dc2da39ca9eb6f3e64155b6c09f71b2f2a] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:35409 - 33584 "HINFO IN 5708939370733745096.5951781600047964236. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014410277s
	
	
	==> describe nodes <==
	Name:               test-preload-651099
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-651099
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=test-preload-651099
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T18_16_41_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:16:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-651099
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:18:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:18:34 +0000   Thu, 15 Aug 2024 18:16:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:18:34 +0000   Thu, 15 Aug 2024 18:16:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:18:34 +0000   Thu, 15 Aug 2024 18:16:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:18:34 +0000   Thu, 15 Aug 2024 18:18:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.43
	  Hostname:    test-preload-651099
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc64860d922e407eb33e32d112871169
	  System UUID:                fc64860d-922e-407e-b33e-32d112871169
	  Boot ID:                    775de7b8-2466-4a96-8d84-821a6f6a56e7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-v9w4v                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     105s
	  kube-system                 etcd-test-preload-651099                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         118s
	  kube-system                 kube-apiserver-test-preload-651099             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-test-preload-651099    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-l5vhv                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-test-preload-651099             100m (5%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node test-preload-651099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node test-preload-651099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node test-preload-651099 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                108s               kubelet          Node test-preload-651099 status is now: NodeReady
	  Normal  RegisteredNode           106s               node-controller  Node test-preload-651099 event: Registered Node test-preload-651099 in Controller
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  40s (x8 over 41s)  kubelet          Node test-preload-651099 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 41s)  kubelet          Node test-preload-651099 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x7 over 41s)  kubelet          Node test-preload-651099 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node test-preload-651099 event: Registered Node test-preload-651099 in Controller
	
	
	==> dmesg <==
	[Aug15 18:17] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050274] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039057] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.758752] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.386718] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.590236] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.522522] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.062346] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063665] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.197105] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.122183] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.285128] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[ +12.761575] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +0.059495] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.851486] systemd-fstab-generator[1139]: Ignoring "noauto" option for root device
	[Aug15 18:18] kauditd_printk_skb: 95 callbacks suppressed
	[ +19.093814] kauditd_printk_skb: 5 callbacks suppressed
	[  +2.712016] systemd-fstab-generator[1872]: Ignoring "noauto" option for root device
	[  +5.028283] kauditd_printk_skb: 55 callbacks suppressed
	
	
	==> etcd [4b717e006f7f0ddc7443030ab65afa79563b37aa29913edc7d69a5bc1e399a78] <==
	{"level":"info","ts":"2024-08-15T18:18:19.412Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"4537875a7ae50e01","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-08-15T18:18:19.412Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-15T18:18:19.413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 switched to configuration voters=(4987603935014751745)"}
	{"level":"info","ts":"2024-08-15T18:18:19.413Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e2f92b1da63e7b06","local-member-id":"4537875a7ae50e01","added-peer-id":"4537875a7ae50e01","added-peer-peer-urls":["https://192.168.39.43:2380"]}
	{"level":"info","ts":"2024-08-15T18:18:19.413Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"e2f92b1da63e7b06","local-member-id":"4537875a7ae50e01","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:18:19.413Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:18:19.414Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T18:18:19.415Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"4537875a7ae50e01","initial-advertise-peer-urls":["https://192.168.39.43:2380"],"listen-peer-urls":["https://192.168.39.43:2380"],"advertise-client-urls":["https://192.168.39.43:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.43:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T18:18:19.415Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T18:18:19.415Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2024-08-15T18:18:19.415Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.43:2380"}
	{"level":"info","ts":"2024-08-15T18:18:20.498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T18:18:20.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T18:18:20.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgPreVoteResp from 4537875a7ae50e01 at term 2"}
	{"level":"info","ts":"2024-08-15T18:18:20.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T18:18:20.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 received MsgVoteResp from 4537875a7ae50e01 at term 3"}
	{"level":"info","ts":"2024-08-15T18:18:20.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"4537875a7ae50e01 became leader at term 3"}
	{"level":"info","ts":"2024-08-15T18:18:20.499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 4537875a7ae50e01 elected leader 4537875a7ae50e01 at term 3"}
	{"level":"info","ts":"2024-08-15T18:18:20.499Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"4537875a7ae50e01","local-member-attributes":"{Name:test-preload-651099 ClientURLs:[https://192.168.39.43:2379]}","request-path":"/0/members/4537875a7ae50e01/attributes","cluster-id":"e2f92b1da63e7b06","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T18:18:20.500Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:18:20.501Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:18:20.502Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T18:18:20.502Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.43:2379"}
	{"level":"info","ts":"2024-08-15T18:18:20.502Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T18:18:20.503Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 18:18:39 up 1 min,  0 users,  load average: 0.95, 0.26, 0.09
	Linux test-preload-651099 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [45f088bb9478950b3caeaf35937cbf61ef3c50c36861e9f7d94a7412dbbb0761] <==
	I0815 18:18:24.070098       1 establishing_controller.go:76] Starting EstablishingController
	I0815 18:18:24.070189       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0815 18:18:24.070209       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0815 18:18:24.070320       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0815 18:18:24.118821       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0815 18:18:24.118914       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0815 18:18:24.203103       1 shared_informer.go:262] Caches are synced for node_authorizer
	E0815 18:18:24.210714       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0815 18:18:24.211806       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0815 18:18:24.218971       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0815 18:18:24.231923       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0815 18:18:24.232183       1 cache.go:39] Caches are synced for autoregister controller
	I0815 18:18:24.232332       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0815 18:18:24.234040       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 18:18:24.258104       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 18:18:24.717476       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0815 18:18:25.036612       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 18:18:25.638249       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0815 18:18:25.651302       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0815 18:18:25.703341       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0815 18:18:25.729214       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 18:18:25.738549       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 18:18:26.439023       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0815 18:18:37.174457       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 18:18:37.228138       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [b1b3a204405193fd1a0f41444413e6dd004360117356a51441bf6d9cabc7d7cf] <==
	I0815 18:18:00.374711       1 server.go:558] external host was not specified, using 192.168.39.43
	I0815 18:18:00.376204       1 server.go:158] Version: v1.24.4
	I0815 18:18:00.376283       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:18:00.754049       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0815 18:18:00.754689       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 18:18:00.754719       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0815 18:18:00.756362       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0815 18:18:00.756399       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0815 18:18:00.760403       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:01.718649       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:01.760834       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:02.719266       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:03.316108       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:04.345832       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:05.842036       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:06.754816       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:10.268611       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:10.690068       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:18.035644       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0815 18:18:18.440240       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0815 18:18:20.760202       1 run.go:74] "command failed" err="context deadline exceeded"
	
	
	==> kube-controller-manager [00242dbc3b137b93961487bdb7b3ac62c46a2b62866c521b6f83ff9350178ecf] <==
	I0815 18:18:37.163546       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0815 18:18:37.165994       1 shared_informer.go:262] Caches are synced for stateful set
	I0815 18:18:37.171963       1 shared_informer.go:262] Caches are synced for daemon sets
	I0815 18:18:37.174789       1 shared_informer.go:262] Caches are synced for job
	I0815 18:18:37.177128       1 shared_informer.go:262] Caches are synced for deployment
	I0815 18:18:37.180098       1 shared_informer.go:262] Caches are synced for GC
	I0815 18:18:37.191019       1 shared_informer.go:262] Caches are synced for PVC protection
	I0815 18:18:37.201241       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0815 18:18:37.210071       1 shared_informer.go:262] Caches are synced for persistent volume
	I0815 18:18:37.211405       1 shared_informer.go:262] Caches are synced for endpoint
	I0815 18:18:37.216246       1 shared_informer.go:262] Caches are synced for disruption
	I0815 18:18:37.216304       1 disruption.go:371] Sending events to api server.
	I0815 18:18:37.217065       1 shared_informer.go:262] Caches are synced for taint
	I0815 18:18:37.217198       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0815 18:18:37.217287       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0815 18:18:37.217548       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-651099. Assuming now as a timestamp.
	I0815 18:18:37.217648       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0815 18:18:37.217822       1 event.go:294] "Event occurred" object="test-preload-651099" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-651099 event: Registered Node test-preload-651099 in Controller"
	I0815 18:18:37.223119       1 shared_informer.go:262] Caches are synced for resource quota
	I0815 18:18:37.226943       1 shared_informer.go:262] Caches are synced for ephemeral
	I0815 18:18:37.243401       1 shared_informer.go:262] Caches are synced for resource quota
	I0815 18:18:37.279188       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0815 18:18:37.640296       1 shared_informer.go:262] Caches are synced for garbage collector
	I0815 18:18:37.640390       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0815 18:18:37.683346       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [0dc02735a720280b0b9ae9aa1b2c60b210e873a40ff4322c3b1c0d4e280be368] <==
		/usr/local/go/src/bytes/buffer.go:204 +0x98
	crypto/tls.(*Conn).readFromUntil(0xc000b59500, {0x4d02200?, 0xc0007654b0}, 0x902?)
		/usr/local/go/src/crypto/tls/conn.go:807 +0xe5
	crypto/tls.(*Conn).readRecordOrCCS(0xc000b59500, 0x0)
		/usr/local/go/src/crypto/tls/conn.go:614 +0x116
	crypto/tls.(*Conn).readRecord(...)
		/usr/local/go/src/crypto/tls/conn.go:582
	crypto/tls.(*Conn).Read(0xc000b59500, {0xc000ee7000, 0x1000, 0x91a200?})
		/usr/local/go/src/crypto/tls/conn.go:1285 +0x16f
	bufio.(*Reader).Read(0xc000322420, {0xc0000f10e0, 0x9, 0x936b82?})
		/usr/local/go/src/bufio/bufio.go:236 +0x1b4
	io.ReadAtLeast({0x4cf9b00, 0xc000322420}, {0xc0000f10e0, 0x9, 0x9}, 0x9)
		/usr/local/go/src/io/io.go:331 +0x9a
	io.ReadFull(...)
		/usr/local/go/src/io/io.go:350
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0000f10e0?, 0x9?, 0xc001ed4780?}, {0x4cf9b00?, 0xc000322420?})
		vendor/golang.org/x/net/http2/frame.go:237 +0x6e
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000f10a0)
		vendor/golang.org/x/net/http2/frame.go:498 +0x95
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc000eeff98)
		vendor/golang.org/x/net/http2/transport.go:2101 +0x130
	k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc0008ff980)
		vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
	created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
		vendor/golang.org/x/net/http2/transport.go:725 +0xa65
	
	
	==> kube-proxy [8ca761380053969267b6ee0fa63723515955ca010483e59160ce73a63b4799ff] <==
	I0815 18:18:26.356776       1 node.go:163] Successfully retrieved node IP: 192.168.39.43
	I0815 18:18:26.356964       1 server_others.go:138] "Detected node IP" address="192.168.39.43"
	I0815 18:18:26.357141       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0815 18:18:26.425976       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0815 18:18:26.426009       1 server_others.go:206] "Using iptables Proxier"
	I0815 18:18:26.426482       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0815 18:18:26.427056       1 server.go:661] "Version info" version="v1.24.4"
	I0815 18:18:26.427084       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:18:26.428402       1 config.go:317] "Starting service config controller"
	I0815 18:18:26.428780       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0815 18:18:26.428825       1 config.go:226] "Starting endpoint slice config controller"
	I0815 18:18:26.428942       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0815 18:18:26.429747       1 config.go:444] "Starting node config controller"
	I0815 18:18:26.429785       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0815 18:18:26.529930       1 shared_informer.go:262] Caches are synced for node config
	I0815 18:18:26.530007       1 shared_informer.go:262] Caches are synced for service config
	I0815 18:18:26.530035       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1f7060f0038f427a69457446000a40dedd72e00c72385219e4c8785bd9720974] <==
	W0815 18:18:24.199128       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 18:18:24.199135       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0815 18:18:24.199190       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 18:18:24.199197       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0815 18:18:24.199247       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 18:18:24.199253       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0815 18:18:24.199287       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 18:18:24.199293       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0815 18:18:24.199360       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 18:18:24.199368       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0815 18:18:24.202460       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 18:18:24.202644       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0815 18:18:24.203022       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 18:18:24.203331       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0815 18:18:24.206247       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 18:18:24.206288       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0815 18:18:24.206341       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 18:18:24.206351       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0815 18:18:24.206438       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 18:18:24.207494       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0815 18:18:24.209589       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 18:18:24.209626       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0815 18:18:24.209729       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 18:18:24.210967       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0815 18:18:24.244017       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:18:24 test-preload-651099 kubelet[1146]: I0815 18:18:24.899117    1146 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 18:18:24 test-preload-651099 kubelet[1146]: I0815 18:18:24.899306    1146 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 18:18:24 test-preload-651099 kubelet[1146]: I0815 18:18:24.899355    1146 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 18:18:24 test-preload-651099 kubelet[1146]: I0815 18:18:24.899390    1146 topology_manager.go:200] "Topology Admit Handler"
	Aug 15 18:18:24 test-preload-651099 kubelet[1146]: E0815 18:18:24.901968    1146 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-v9w4v" podUID=afa432d7-b799-483c-b5d7-076d7d969134
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.074995    1146 scope.go:110] "RemoveContainer" containerID="0dc02735a720280b0b9ae9aa1b2c60b210e873a40ff4322c3b1c0d4e280be368"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.080624    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bllhd\" (UniqueName: \"kubernetes.io/projected/afa432d7-b799-483c-b5d7-076d7d969134-kube-api-access-bllhd\") pod \"coredns-6d4b75cb6d-v9w4v\" (UID: \"afa432d7-b799-483c-b5d7-076d7d969134\") " pod="kube-system/coredns-6d4b75cb6d-v9w4v"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.080705    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d67a1cc4-0c12-4767-a5d7-2fa970b89f60-kube-proxy\") pod \"kube-proxy-l5vhv\" (UID: \"d67a1cc4-0c12-4767-a5d7-2fa970b89f60\") " pod="kube-system/kube-proxy-l5vhv"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.080754    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d67a1cc4-0c12-4767-a5d7-2fa970b89f60-xtables-lock\") pod \"kube-proxy-l5vhv\" (UID: \"d67a1cc4-0c12-4767-a5d7-2fa970b89f60\") " pod="kube-system/kube-proxy-l5vhv"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.080801    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d67a1cc4-0c12-4767-a5d7-2fa970b89f60-lib-modules\") pod \"kube-proxy-l5vhv\" (UID: \"d67a1cc4-0c12-4767-a5d7-2fa970b89f60\") " pod="kube-system/kube-proxy-l5vhv"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.080919    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gk9b\" (UniqueName: \"kubernetes.io/projected/d67a1cc4-0c12-4767-a5d7-2fa970b89f60-kube-api-access-4gk9b\") pod \"kube-proxy-l5vhv\" (UID: \"d67a1cc4-0c12-4767-a5d7-2fa970b89f60\") " pod="kube-system/kube-proxy-l5vhv"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.080977    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume\") pod \"coredns-6d4b75cb6d-v9w4v\" (UID: \"afa432d7-b799-483c-b5d7-076d7d969134\") " pod="kube-system/coredns-6d4b75cb6d-v9w4v"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.081030    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4b518922-07af-44ce-9e4a-5d7d60c842d7-tmp\") pod \"storage-provisioner\" (UID: \"4b518922-07af-44ce-9e4a-5d7d60c842d7\") " pod="kube-system/storage-provisioner"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.081094    1146 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w486r\" (UniqueName: \"kubernetes.io/projected/4b518922-07af-44ce-9e4a-5d7d60c842d7-kube-api-access-w486r\") pod \"storage-provisioner\" (UID: \"4b518922-07af-44ce-9e4a-5d7d60c842d7\") " pod="kube-system/storage-provisioner"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: I0815 18:18:25.081151    1146 reconciler.go:159] "Reconciler: start to sync state"
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: E0815 18:18:25.198299    1146 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: E0815 18:18:25.198424    1146 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume podName:afa432d7-b799-483c-b5d7-076d7d969134 nodeName:}" failed. No retries permitted until 2024-08-15 18:18:25.698383636 +0000 UTC m=+26.953646777 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume") pod "coredns-6d4b75cb6d-v9w4v" (UID: "afa432d7-b799-483c-b5d7-076d7d969134") : object "kube-system"/"coredns" not registered
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: E0815 18:18:25.699603    1146 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 15 18:18:25 test-preload-651099 kubelet[1146]: E0815 18:18:25.699664    1146 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume podName:afa432d7-b799-483c-b5d7-076d7d969134 nodeName:}" failed. No retries permitted until 2024-08-15 18:18:26.699651209 +0000 UTC m=+27.954914330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume") pod "coredns-6d4b75cb6d-v9w4v" (UID: "afa432d7-b799-483c-b5d7-076d7d969134") : object "kube-system"/"coredns" not registered
	Aug 15 18:18:26 test-preload-651099 kubelet[1146]: E0815 18:18:26.705923    1146 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 15 18:18:26 test-preload-651099 kubelet[1146]: E0815 18:18:26.706009    1146 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume podName:afa432d7-b799-483c-b5d7-076d7d969134 nodeName:}" failed. No retries permitted until 2024-08-15 18:18:28.705992554 +0000 UTC m=+29.961255687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume") pod "coredns-6d4b75cb6d-v9w4v" (UID: "afa432d7-b799-483c-b5d7-076d7d969134") : object "kube-system"/"coredns" not registered
	Aug 15 18:18:26 test-preload-651099 kubelet[1146]: E0815 18:18:26.976789    1146 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-v9w4v" podUID=afa432d7-b799-483c-b5d7-076d7d969134
	Aug 15 18:18:26 test-preload-651099 kubelet[1146]: I0815 18:18:26.981651    1146 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=82a41408-dd1d-4963-a1cb-c6c98fdb10f6 path="/var/lib/kubelet/pods/82a41408-dd1d-4963-a1cb-c6c98fdb10f6/volumes"
	Aug 15 18:18:28 test-preload-651099 kubelet[1146]: E0815 18:18:28.720159    1146 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 15 18:18:28 test-preload-651099 kubelet[1146]: E0815 18:18:28.720565    1146 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume podName:afa432d7-b799-483c-b5d7-076d7d969134 nodeName:}" failed. No retries permitted until 2024-08-15 18:18:32.720538212 +0000 UTC m=+33.975801344 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/afa432d7-b799-483c-b5d7-076d7d969134-config-volume") pod "coredns-6d4b75cb6d-v9w4v" (UID: "afa432d7-b799-483c-b5d7-076d7d969134") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [e48a6cc5bab43923ee5560792dc514c07a22eb4771a786b0a5b0aa0445ee9dc0] <==
	I0815 18:18:26.141239       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-651099 -n test-preload-651099
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-651099 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-651099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-651099
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-651099: (1.100395517s)
--- FAIL: TestPreload (274.89s)

                                                
                                    
x
+
TestKubernetesUpgrade (419.05s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m53.007430621s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-729203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-729203" primary control-plane node in "kubernetes-upgrade-729203" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 18:20:35.160339   57390 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:20:35.160456   57390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:20:35.160466   57390 out.go:358] Setting ErrFile to fd 2...
	I0815 18:20:35.160471   57390 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:20:35.160658   57390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:20:35.161606   57390 out.go:352] Setting JSON to false
	I0815 18:20:35.162414   57390 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7381,"bootTime":1723738654,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:20:35.162474   57390 start.go:139] virtualization: kvm guest
	I0815 18:20:35.164210   57390 out.go:177] * [kubernetes-upgrade-729203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:20:35.165836   57390 notify.go:220] Checking for updates...
	I0815 18:20:35.167025   57390 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:20:35.169433   57390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:20:35.171749   57390 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:20:35.172959   57390 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:20:35.174192   57390 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:20:35.175634   57390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:20:35.177151   57390 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:20:35.215862   57390 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 18:20:35.217093   57390 start.go:297] selected driver: kvm2
	I0815 18:20:35.217118   57390 start.go:901] validating driver "kvm2" against <nil>
	I0815 18:20:35.217132   57390 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:20:35.218113   57390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:20:35.229756   57390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:20:35.246217   57390 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:20:35.246265   57390 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 18:20:35.246468   57390 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 18:20:35.246493   57390 cni.go:84] Creating CNI manager for ""
	I0815 18:20:35.246500   57390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:20:35.246507   57390 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 18:20:35.246546   57390 start.go:340] cluster config:
	{Name:kubernetes-upgrade-729203 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:20:35.246628   57390 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:20:35.248342   57390 out.go:177] * Starting "kubernetes-upgrade-729203" primary control-plane node in "kubernetes-upgrade-729203" cluster
	I0815 18:20:35.249673   57390 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:20:35.249715   57390 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:20:35.249736   57390 cache.go:56] Caching tarball of preloaded images
	I0815 18:20:35.249812   57390 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:20:35.249826   57390 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:20:35.250274   57390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/config.json ...
	I0815 18:20:35.250306   57390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/config.json: {Name:mk7c5170f4f34a0152e20e536527939624e919e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:20:35.250477   57390 start.go:360] acquireMachinesLock for kubernetes-upgrade-729203: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:20:57.305558   57390 start.go:364] duration metric: took 22.055048323s to acquireMachinesLock for "kubernetes-upgrade-729203"
	I0815 18:20:57.305625   57390 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-729203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:20:57.305748   57390 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 18:20:57.307897   57390 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 18:20:57.308133   57390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:20:57.308178   57390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:20:57.326098   57390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42565
	I0815 18:20:57.326502   57390 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:20:57.327046   57390 main.go:141] libmachine: Using API Version  1
	I0815 18:20:57.327065   57390 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:20:57.327403   57390 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:20:57.327610   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetMachineName
	I0815 18:20:57.327763   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:20:57.327946   57390 start.go:159] libmachine.API.Create for "kubernetes-upgrade-729203" (driver="kvm2")
	I0815 18:20:57.327973   57390 client.go:168] LocalClient.Create starting
	I0815 18:20:57.328008   57390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 18:20:57.328037   57390 main.go:141] libmachine: Decoding PEM data...
	I0815 18:20:57.328052   57390 main.go:141] libmachine: Parsing certificate...
	I0815 18:20:57.328105   57390 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 18:20:57.328141   57390 main.go:141] libmachine: Decoding PEM data...
	I0815 18:20:57.328152   57390 main.go:141] libmachine: Parsing certificate...
	I0815 18:20:57.328166   57390 main.go:141] libmachine: Running pre-create checks...
	I0815 18:20:57.328174   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .PreCreateCheck
	I0815 18:20:57.328599   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetConfigRaw
	I0815 18:20:57.329004   57390 main.go:141] libmachine: Creating machine...
	I0815 18:20:57.329018   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .Create
	I0815 18:20:57.329167   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Creating KVM machine...
	I0815 18:20:57.330457   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found existing default KVM network
	I0815 18:20:57.331443   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:20:57.331266   57737 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:6a:74:00} reservation:<nil>}
	I0815 18:20:57.332283   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:20:57.332215   57737 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000112e70}
	I0815 18:20:57.332342   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | created network xml: 
	I0815 18:20:57.332363   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | <network>
	I0815 18:20:57.332376   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |   <name>mk-kubernetes-upgrade-729203</name>
	I0815 18:20:57.332394   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |   <dns enable='no'/>
	I0815 18:20:57.332403   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |   
	I0815 18:20:57.332420   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0815 18:20:57.332452   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |     <dhcp>
	I0815 18:20:57.332480   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0815 18:20:57.332525   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |     </dhcp>
	I0815 18:20:57.332543   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |   </ip>
	I0815 18:20:57.332555   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG |   
	I0815 18:20:57.332566   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | </network>
	I0815 18:20:57.332580   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | 
	I0815 18:20:57.337890   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | trying to create private KVM network mk-kubernetes-upgrade-729203 192.168.50.0/24...
	I0815 18:20:57.411426   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | private KVM network mk-kubernetes-upgrade-729203 192.168.50.0/24 created
	I0815 18:20:57.411461   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203 ...
	I0815 18:20:57.411487   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 18:20:57.411539   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:20:57.411456   57737 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:20:57.411685   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 18:20:57.676656   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:20:57.676549   57737 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa...
	I0815 18:20:58.083751   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:20:58.083631   57737 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/kubernetes-upgrade-729203.rawdisk...
	I0815 18:20:58.083781   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Writing magic tar header
	I0815 18:20:58.083799   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Writing SSH key tar header
	I0815 18:20:58.083818   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:20:58.083739   57737 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203 ...
	I0815 18:20:58.083839   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203
	I0815 18:20:58.083870   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203 (perms=drwx------)
	I0815 18:20:58.083898   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 18:20:58.083915   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 18:20:58.083926   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 18:20:58.083939   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:20:58.083960   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 18:20:58.083981   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 18:20:58.083997   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 18:20:58.084007   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Checking permissions on dir: /home/jenkins
	I0815 18:20:58.084020   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 18:20:58.084035   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 18:20:58.084057   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Creating domain...
	I0815 18:20:58.084138   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Checking permissions on dir: /home
	I0815 18:20:58.084176   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Skipping /home - not owner
	I0815 18:20:58.085614   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) define libvirt domain using xml: 
	I0815 18:20:58.085638   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) <domain type='kvm'>
	I0815 18:20:58.085669   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   <name>kubernetes-upgrade-729203</name>
	I0815 18:20:58.085689   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   <memory unit='MiB'>2200</memory>
	I0815 18:20:58.085698   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   <vcpu>2</vcpu>
	I0815 18:20:58.085717   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   <features>
	I0815 18:20:58.085725   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <acpi/>
	I0815 18:20:58.085731   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <apic/>
	I0815 18:20:58.085740   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <pae/>
	I0815 18:20:58.085745   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     
	I0815 18:20:58.085757   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   </features>
	I0815 18:20:58.085765   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   <cpu mode='host-passthrough'>
	I0815 18:20:58.085771   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   
	I0815 18:20:58.085776   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   </cpu>
	I0815 18:20:58.085809   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   <os>
	I0815 18:20:58.085831   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <type>hvm</type>
	I0815 18:20:58.085840   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <boot dev='cdrom'/>
	I0815 18:20:58.085849   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <boot dev='hd'/>
	I0815 18:20:58.085858   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <bootmenu enable='no'/>
	I0815 18:20:58.085870   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   </os>
	I0815 18:20:58.085879   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   <devices>
	I0815 18:20:58.085891   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <disk type='file' device='cdrom'>
	I0815 18:20:58.085916   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/boot2docker.iso'/>
	I0815 18:20:58.085932   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <target dev='hdc' bus='scsi'/>
	I0815 18:20:58.085945   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <readonly/>
	I0815 18:20:58.085961   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     </disk>
	I0815 18:20:58.085972   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <disk type='file' device='disk'>
	I0815 18:20:58.085984   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 18:20:58.085998   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/kubernetes-upgrade-729203.rawdisk'/>
	I0815 18:20:58.086017   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <target dev='hda' bus='virtio'/>
	I0815 18:20:58.086030   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     </disk>
	I0815 18:20:58.086040   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <interface type='network'>
	I0815 18:20:58.086055   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <source network='mk-kubernetes-upgrade-729203'/>
	I0815 18:20:58.086071   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <model type='virtio'/>
	I0815 18:20:58.086090   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     </interface>
	I0815 18:20:58.086113   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <interface type='network'>
	I0815 18:20:58.086137   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <source network='default'/>
	I0815 18:20:58.086148   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <model type='virtio'/>
	I0815 18:20:58.086156   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     </interface>
	I0815 18:20:58.086177   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <serial type='pty'>
	I0815 18:20:58.086190   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <target port='0'/>
	I0815 18:20:58.086202   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     </serial>
	I0815 18:20:58.086215   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <console type='pty'>
	I0815 18:20:58.086224   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <target type='serial' port='0'/>
	I0815 18:20:58.086236   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     </console>
	I0815 18:20:58.086252   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     <rng model='virtio'>
	I0815 18:20:58.086267   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)       <backend model='random'>/dev/random</backend>
	I0815 18:20:58.086287   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     </rng>
	I0815 18:20:58.086296   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     
	I0815 18:20:58.086305   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)     
	I0815 18:20:58.086314   57390 main.go:141] libmachine: (kubernetes-upgrade-729203)   </devices>
	I0815 18:20:58.086327   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) </domain>
	I0815 18:20:58.086343   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) 
	I0815 18:20:58.091209   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b0:48:70 in network default
	I0815 18:20:58.091868   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Ensuring networks are active...
	I0815 18:20:58.091889   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:20:58.092641   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Ensuring network default is active
	I0815 18:20:58.093133   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Ensuring network mk-kubernetes-upgrade-729203 is active
	I0815 18:20:58.093581   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Getting domain xml...
	I0815 18:20:58.094367   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Creating domain...
	I0815 18:20:59.480177   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Waiting to get IP...
	I0815 18:20:59.481162   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:20:59.481590   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:20:59.481647   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:20:59.481583   57737 retry.go:31] will retry after 230.400737ms: waiting for machine to come up
	I0815 18:20:59.714285   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:20:59.714830   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:20:59.714862   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:20:59.714750   57737 retry.go:31] will retry after 376.084739ms: waiting for machine to come up
	I0815 18:21:00.092370   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:00.092817   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:00.092847   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:00.092763   57737 retry.go:31] will retry after 438.292007ms: waiting for machine to come up
	I0815 18:21:00.532376   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:00.532846   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:00.532872   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:00.532787   57737 retry.go:31] will retry after 389.548448ms: waiting for machine to come up
	I0815 18:21:00.924353   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:00.924899   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:00.924927   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:00.924849   57737 retry.go:31] will retry after 664.533684ms: waiting for machine to come up
	I0815 18:21:01.591587   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:01.592076   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:01.592138   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:01.592009   57737 retry.go:31] will retry after 883.333968ms: waiting for machine to come up
	I0815 18:21:02.477117   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:02.477783   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:02.477810   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:02.477727   57737 retry.go:31] will retry after 857.882684ms: waiting for machine to come up
	I0815 18:21:03.337299   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:03.337669   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:03.337697   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:03.337607   57737 retry.go:31] will retry after 899.658886ms: waiting for machine to come up
	I0815 18:21:04.238699   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:04.239155   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:04.239184   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:04.239093   57737 retry.go:31] will retry after 1.847516287s: waiting for machine to come up
	I0815 18:21:06.089262   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:06.089706   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:06.089740   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:06.089663   57737 retry.go:31] will retry after 2.150963317s: waiting for machine to come up
	I0815 18:21:08.242123   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:08.242609   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:08.242639   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:08.242560   57737 retry.go:31] will retry after 1.918047447s: waiting for machine to come up
	I0815 18:21:10.161943   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:10.162362   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:10.162395   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:10.162299   57737 retry.go:31] will retry after 3.092581381s: waiting for machine to come up
	I0815 18:21:13.256678   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:13.257187   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:13.257217   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:13.257135   57737 retry.go:31] will retry after 4.344358168s: waiting for machine to come up
	I0815 18:21:17.603065   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:17.603473   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find current IP address of domain kubernetes-upgrade-729203 in network mk-kubernetes-upgrade-729203
	I0815 18:21:17.603502   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | I0815 18:21:17.603422   57737 retry.go:31] will retry after 4.514075989s: waiting for machine to come up
	I0815 18:21:22.122978   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.123529   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Found IP for machine: 192.168.50.3
	I0815 18:21:22.123566   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Reserving static IP address...
	I0815 18:21:22.123575   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has current primary IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.123903   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-729203", mac: "52:54:00:b9:2e:4c", ip: "192.168.50.3"} in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.195873   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Getting to WaitForSSH function...
	I0815 18:21:22.195900   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Reserved static IP address: 192.168.50.3
	I0815 18:21:22.195915   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Waiting for SSH to be available...
	I0815 18:21:22.198682   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.199113   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:22.199144   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.199309   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Using SSH client type: external
	I0815 18:21:22.199348   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa (-rw-------)
	I0815 18:21:22.199381   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:21:22.199406   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | About to run SSH command:
	I0815 18:21:22.199423   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | exit 0
	I0815 18:21:22.328447   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | SSH cmd err, output: <nil>: 
	I0815 18:21:22.328744   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) KVM machine creation complete!
	I0815 18:21:22.329054   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetConfigRaw
	I0815 18:21:22.329561   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:21:22.329754   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:21:22.329934   57390 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 18:21:22.329947   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetState
	I0815 18:21:22.331245   57390 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 18:21:22.331259   57390 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 18:21:22.331267   57390 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 18:21:22.331274   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:22.333825   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.334223   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:22.334249   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.334419   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:22.334607   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.334818   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.335008   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:22.335222   57390 main.go:141] libmachine: Using SSH client type: native
	I0815 18:21:22.335469   57390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:21:22.335481   57390 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 18:21:22.447640   57390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:21:22.447666   57390 main.go:141] libmachine: Detecting the provisioner...
	I0815 18:21:22.447673   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:22.450288   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.450719   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:22.450746   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.450875   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:22.451084   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.451217   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.451355   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:22.451509   57390 main.go:141] libmachine: Using SSH client type: native
	I0815 18:21:22.451668   57390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:21:22.451678   57390 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 18:21:22.565381   57390 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 18:21:22.565447   57390 main.go:141] libmachine: found compatible host: buildroot
	I0815 18:21:22.565462   57390 main.go:141] libmachine: Provisioning with buildroot...
	I0815 18:21:22.565473   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetMachineName
	I0815 18:21:22.565744   57390 buildroot.go:166] provisioning hostname "kubernetes-upgrade-729203"
	I0815 18:21:22.565772   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetMachineName
	I0815 18:21:22.565942   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:22.568525   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.568878   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:22.568906   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.569052   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:22.569233   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.569408   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.569544   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:22.569713   57390 main.go:141] libmachine: Using SSH client type: native
	I0815 18:21:22.569917   57390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:21:22.569931   57390 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-729203 && echo "kubernetes-upgrade-729203" | sudo tee /etc/hostname
	I0815 18:21:22.696697   57390 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-729203
	
	I0815 18:21:22.696729   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:22.699196   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.699537   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:22.699567   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.699683   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:22.699865   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.700020   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.700161   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:22.700309   57390 main.go:141] libmachine: Using SSH client type: native
	I0815 18:21:22.700549   57390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:21:22.700573   57390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-729203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-729203/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-729203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:21:22.825550   57390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:21:22.825578   57390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:21:22.825619   57390 buildroot.go:174] setting up certificates
	I0815 18:21:22.825632   57390 provision.go:84] configureAuth start
	I0815 18:21:22.825648   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetMachineName
	I0815 18:21:22.825961   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetIP
	I0815 18:21:22.828585   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.828903   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:22.828932   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.829069   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:22.831041   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.831356   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:22.831384   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.831510   57390 provision.go:143] copyHostCerts
	I0815 18:21:22.831569   57390 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:21:22.831590   57390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:21:22.831662   57390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:21:22.831789   57390 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:21:22.831800   57390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:21:22.831828   57390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:21:22.831918   57390 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:21:22.831928   57390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:21:22.831954   57390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:21:22.832033   57390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-729203 san=[127.0.0.1 192.168.50.3 kubernetes-upgrade-729203 localhost minikube]
	I0815 18:21:22.907868   57390 provision.go:177] copyRemoteCerts
	I0815 18:21:22.907939   57390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:21:22.907968   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:22.910884   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.911331   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:22.911369   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:22.911527   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:22.911720   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:22.911890   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:22.912039   57390 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa Username:docker}
	I0815 18:21:22.999083   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:21:23.024343   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0815 18:21:23.049121   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:21:23.072914   57390 provision.go:87] duration metric: took 247.268603ms to configureAuth
	I0815 18:21:23.072940   57390 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:21:23.073126   57390 config.go:182] Loaded profile config "kubernetes-upgrade-729203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:21:23.073208   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:23.075908   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.076254   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:23.076286   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.076501   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:23.076677   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:23.076852   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:23.077057   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:23.077215   57390 main.go:141] libmachine: Using SSH client type: native
	I0815 18:21:23.077370   57390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:21:23.077384   57390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:21:23.644066   57390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:21:23.644094   57390 main.go:141] libmachine: Checking connection to Docker...
	I0815 18:21:23.644104   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetURL
	I0815 18:21:23.645455   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | Using libvirt version 6000000
	I0815 18:21:23.647837   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.648270   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:23.648296   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.648506   57390 main.go:141] libmachine: Docker is up and running!
	I0815 18:21:23.648523   57390 main.go:141] libmachine: Reticulating splines...
	I0815 18:21:23.648530   57390 client.go:171] duration metric: took 26.320547061s to LocalClient.Create
	I0815 18:21:23.648552   57390 start.go:167] duration metric: took 26.320605676s to libmachine.API.Create "kubernetes-upgrade-729203"
	I0815 18:21:23.648564   57390 start.go:293] postStartSetup for "kubernetes-upgrade-729203" (driver="kvm2")
	I0815 18:21:23.648577   57390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:21:23.648597   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:21:23.648868   57390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:21:23.648893   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:23.651252   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.651537   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:23.651557   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.651736   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:23.651952   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:23.652109   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:23.652252   57390 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa Username:docker}
	I0815 18:21:23.740961   57390 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:21:23.745707   57390 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:21:23.745728   57390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:21:23.745786   57390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:21:23.745855   57390 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:21:23.745935   57390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:21:23.755883   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:21:23.779082   57390 start.go:296] duration metric: took 130.506414ms for postStartSetup
	I0815 18:21:23.779126   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetConfigRaw
	I0815 18:21:23.786136   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetIP
	I0815 18:21:23.788527   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.788973   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:23.789006   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.789218   57390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/config.json ...
	I0815 18:21:23.851802   57390 start.go:128] duration metric: took 26.546016106s to createHost
	I0815 18:21:23.851851   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:23.854564   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.854950   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:23.854981   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.855086   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:23.855304   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:23.855443   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:23.855600   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:23.855727   57390 main.go:141] libmachine: Using SSH client type: native
	I0815 18:21:23.855901   57390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:21:23.855911   57390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:21:23.965115   57390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746083.942194893
	
	I0815 18:21:23.965138   57390 fix.go:216] guest clock: 1723746083.942194893
	I0815 18:21:23.965148   57390 fix.go:229] Guest: 2024-08-15 18:21:23.942194893 +0000 UTC Remote: 2024-08-15 18:21:23.851831202 +0000 UTC m=+48.743137675 (delta=90.363691ms)
	I0815 18:21:23.965257   57390 fix.go:200] guest clock delta is within tolerance: 90.363691ms
	I0815 18:21:23.965269   57390 start.go:83] releasing machines lock for "kubernetes-upgrade-729203", held for 26.659678713s
	I0815 18:21:23.965303   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:21:23.965594   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetIP
	I0815 18:21:23.968738   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.969060   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:23.969122   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.969252   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:21:23.969790   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:21:23.969931   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:21:23.970021   57390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:21:23.970083   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:23.970135   57390 ssh_runner.go:195] Run: cat /version.json
	I0815 18:21:23.970158   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:21:23.972697   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.972971   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:23.973019   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.973042   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.973129   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:23.973320   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:23.973434   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:23.973463   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:23.973485   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:23.973640   57390 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa Username:docker}
	I0815 18:21:23.973691   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:21:23.973832   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:21:23.973989   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:21:23.974184   57390 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa Username:docker}
	I0815 18:21:24.075005   57390 ssh_runner.go:195] Run: systemctl --version
	I0815 18:21:24.082320   57390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:21:24.250999   57390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:21:24.257600   57390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:21:24.257671   57390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:21:24.278087   57390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:21:24.278113   57390 start.go:495] detecting cgroup driver to use...
	I0815 18:21:24.278175   57390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:21:24.297381   57390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:21:24.312130   57390 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:21:24.312190   57390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:21:24.325549   57390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:21:24.339174   57390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:21:24.451047   57390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:21:24.592578   57390 docker.go:233] disabling docker service ...
	I0815 18:21:24.592651   57390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:21:24.607146   57390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:21:24.619882   57390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:21:24.755853   57390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:21:24.875630   57390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:21:24.889943   57390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:21:24.908041   57390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:21:24.908108   57390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:21:24.917913   57390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:21:24.917989   57390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:21:24.928499   57390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:21:24.941939   57390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:21:24.955004   57390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:21:24.966347   57390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:21:24.976271   57390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:21:24.976316   57390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:21:24.989884   57390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:21:25.001569   57390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:21:25.113227   57390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:21:25.258776   57390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:21:25.258862   57390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:21:25.264963   57390 start.go:563] Will wait 60s for crictl version
	I0815 18:21:25.265022   57390 ssh_runner.go:195] Run: which crictl
	I0815 18:21:25.269884   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:21:25.307290   57390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:21:25.307377   57390 ssh_runner.go:195] Run: crio --version
	I0815 18:21:25.339848   57390 ssh_runner.go:195] Run: crio --version
	I0815 18:21:25.368769   57390 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:21:25.370122   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetIP
	I0815 18:21:25.373405   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:25.373779   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:21:12 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:21:25.373814   57390 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:21:25.374036   57390 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:21:25.378727   57390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:21:25.392952   57390 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-729203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:21:25.393057   57390 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:21:25.393103   57390 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:21:25.434803   57390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:21:25.434862   57390 ssh_runner.go:195] Run: which lz4
	I0815 18:21:25.439585   57390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:21:25.444442   57390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:21:25.444477   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:21:27.146881   57390 crio.go:462] duration metric: took 1.707344507s to copy over tarball
	I0815 18:21:27.146974   57390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:21:29.844647   57390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.697634609s)
	I0815 18:21:29.844675   57390 crio.go:469] duration metric: took 2.69776417s to extract the tarball
	I0815 18:21:29.844684   57390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:21:29.887794   57390 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:21:29.936170   57390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:21:29.936200   57390 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:21:29.936288   57390 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:21:29.936290   57390 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:21:29.936367   57390 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:21:29.936406   57390 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:21:29.936406   57390 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:21:29.936315   57390 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:21:29.936292   57390 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:21:29.936379   57390 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:21:29.938077   57390 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:21:29.938089   57390 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:21:29.938102   57390 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:21:29.938090   57390 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:21:29.938097   57390 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:21:29.938138   57390 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:21:29.938154   57390 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:21:29.938155   57390 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:21:30.209946   57390 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:21:30.231622   57390 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:21:30.253143   57390 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:21:30.253185   57390 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:21:30.253223   57390 ssh_runner.go:195] Run: which crictl
	I0815 18:21:30.288336   57390 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:21:30.288371   57390 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:21:30.288387   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:21:30.288409   57390 ssh_runner.go:195] Run: which crictl
	I0815 18:21:30.305226   57390 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:21:30.306897   57390 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:21:30.313718   57390 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:21:30.356451   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:21:30.356510   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:21:30.363888   57390 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:21:30.372896   57390 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:21:30.388384   57390 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:21:30.388427   57390 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:21:30.388470   57390 ssh_runner.go:195] Run: which crictl
	I0815 18:21:30.474769   57390 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:21:30.474810   57390 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:21:30.474869   57390 ssh_runner.go:195] Run: which crictl
	I0815 18:21:30.500034   57390 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:21:30.500077   57390 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:21:30.500123   57390 ssh_runner.go:195] Run: which crictl
	I0815 18:21:30.519656   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:21:30.519666   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:21:30.519704   57390 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:21:30.519733   57390 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:21:30.519758   57390 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:21:30.519771   57390 ssh_runner.go:195] Run: which crictl
	I0815 18:21:30.519786   57390 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:21:30.519820   57390 ssh_runner.go:195] Run: which crictl
	I0815 18:21:30.519850   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:21:30.519887   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:21:30.523058   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:21:30.635868   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:21:30.635908   57390 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:21:30.635969   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:21:30.635979   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:21:30.636395   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:21:30.636516   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:21:30.658659   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:21:30.744846   57390 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:21:30.774202   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:21:30.778188   57390 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:21:30.778283   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:21:30.778380   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:21:30.778474   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:21:30.788844   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:21:30.991521   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:21:30.991623   57390 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:21:30.991680   57390 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:21:30.991729   57390 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:21:30.991803   57390 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:21:31.037520   57390 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:21:31.037595   57390 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:21:31.037657   57390 cache_images.go:92] duration metric: took 1.101441478s to LoadCachedImages
	W0815 18:21:31.037809   57390 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2: no such file or directory
	I0815 18:21:31.037828   57390 kubeadm.go:934] updating node { 192.168.50.3 8443 v1.20.0 crio true true} ...
	I0815 18:21:31.037927   57390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-729203 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:21:31.037984   57390 ssh_runner.go:195] Run: crio config
	I0815 18:21:31.083372   57390 cni.go:84] Creating CNI manager for ""
	I0815 18:21:31.083407   57390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:21:31.083428   57390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:21:31.083456   57390 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.3 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-729203 NodeName:kubernetes-upgrade-729203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:21:31.083658   57390 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-729203"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:21:31.083734   57390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:21:31.094249   57390 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:21:31.094305   57390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:21:31.104064   57390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (431 bytes)
	I0815 18:21:31.120797   57390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:21:31.137414   57390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:21:31.154193   57390 ssh_runner.go:195] Run: grep 192.168.50.3	control-plane.minikube.internal$ /etc/hosts
	I0815 18:21:31.158515   57390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:21:31.170985   57390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:21:31.300992   57390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:21:31.319201   57390 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203 for IP: 192.168.50.3
	I0815 18:21:31.319226   57390 certs.go:194] generating shared ca certs ...
	I0815 18:21:31.319246   57390 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:21:31.319442   57390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:21:31.319511   57390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:21:31.319524   57390 certs.go:256] generating profile certs ...
	I0815 18:21:31.319593   57390 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/client.key
	I0815 18:21:31.319612   57390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/client.crt with IP's: []
	I0815 18:21:31.451689   57390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/client.crt ...
	I0815 18:21:31.451717   57390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/client.crt: {Name:mka91a01daae335fded5112b3a698b4784b04f86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:21:31.451899   57390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/client.key ...
	I0815 18:21:31.451920   57390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/client.key: {Name:mk3675ded92a46a65f3cba9ccd6d22a19c6021b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:21:31.452040   57390 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.key.a6902cfa
	I0815 18:21:31.452060   57390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.crt.a6902cfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.3]
	I0815 18:21:31.632810   57390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.crt.a6902cfa ...
	I0815 18:21:31.632841   57390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.crt.a6902cfa: {Name:mk7f7889f6d8b28082921300f708128c7a6d2778 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:21:31.633026   57390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.key.a6902cfa ...
	I0815 18:21:31.633044   57390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.key.a6902cfa: {Name:mkea1607e0ee87651e61e244219d1968fecbc5c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:21:31.633147   57390 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.crt.a6902cfa -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.crt
	I0815 18:21:31.633220   57390 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.key.a6902cfa -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.key
	I0815 18:21:31.633271   57390 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.key
	I0815 18:21:31.633287   57390 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.crt with IP's: []
	I0815 18:21:31.762246   57390 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.crt ...
	I0815 18:21:31.762278   57390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.crt: {Name:mk4f5c47b09ea9a9f6151e7c200a4499d50168ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:21:31.762472   57390 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.key ...
	I0815 18:21:31.762492   57390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.key: {Name:mk2c568a4fd459002584f0637536a4a1fe87a40e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:21:31.762711   57390 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:21:31.762757   57390 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:21:31.762772   57390 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:21:31.762802   57390 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:21:31.762834   57390 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:21:31.762866   57390 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:21:31.762916   57390 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:21:31.763550   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:21:31.794650   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:21:31.819199   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:21:31.843518   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:21:31.867312   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 18:21:31.891406   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:21:31.915508   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:21:31.939657   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:21:31.963160   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:21:31.986141   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:21:32.010740   57390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:21:32.036707   57390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:21:32.053939   57390 ssh_runner.go:195] Run: openssl version
	I0815 18:21:32.059974   57390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:21:32.070242   57390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:21:32.074585   57390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:21:32.074634   57390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:21:32.080581   57390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:21:32.091026   57390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:21:32.101207   57390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:21:32.105476   57390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:21:32.105522   57390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:21:32.110988   57390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:21:32.121547   57390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:21:32.136338   57390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:21:32.141146   57390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:21:32.141204   57390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:21:32.146987   57390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
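Note: the openssl/ln steps above install each uploaded PEM into the guest's trust store by symlinking it under /etc/ssl/certs at its OpenSSL subject-hash name. A minimal standalone sketch of that pattern, assuming a certificate at the hypothetical path /usr/share/ca-certificates/example.pem (illustration only, not minikube's exact code):

    # compute the subject hash OpenSSL uses to look up trusted CAs
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)
    # link the cert into /etc/ssl/certs under that hash so TLS clients on the node can find it
    sudo ln -fs /usr/share/ca-certificates/example.pem "/etc/ssl/certs/${hash}.0"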
	I0815 18:21:32.162059   57390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:21:32.166621   57390 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 18:21:32.166678   57390 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-729203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.3 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:21:32.166761   57390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:21:32.166807   57390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:21:32.221075   57390 cri.go:89] found id: ""
	I0815 18:21:32.221147   57390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:21:32.234204   57390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:21:32.244322   57390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:21:32.253934   57390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:21:32.253955   57390 kubeadm.go:157] found existing configuration files:
	
	I0815 18:21:32.254004   57390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:21:32.263228   57390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:21:32.263280   57390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:21:32.272628   57390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:21:32.281478   57390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:21:32.281536   57390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:21:32.290534   57390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:21:32.299521   57390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:21:32.299573   57390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:21:32.308660   57390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:21:32.318057   57390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:21:32.318113   57390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
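Note: before the kubeadm init below, minikube checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it; here every grep exits with status 2 simply because the files do not exist yet, so the rm calls are no-ops. A rough shell equivalent of that cleanup loop (the endpoint value is taken from the log above; the loop itself is an illustrative sketch, not minikube's exact code):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # drop any stale kubeconfig that does not point at the expected control-plane endpoint
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done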
	I0815 18:21:32.328968   57390 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:21:32.604409   57390 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:23:30.031489   57390 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:23:30.031582   57390 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:23:30.033781   57390 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:23:30.033850   57390 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:23:30.033939   57390 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:23:30.034053   57390 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:23:30.034171   57390 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:23:30.034249   57390 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:23:30.036202   57390 out.go:235]   - Generating certificates and keys ...
	I0815 18:23:30.036310   57390 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:23:30.036407   57390 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:23:30.036526   57390 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 18:23:30.036597   57390 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 18:23:30.036647   57390 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 18:23:30.036732   57390 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 18:23:30.036832   57390 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 18:23:30.036986   57390 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-729203 localhost] and IPs [192.168.50.3 127.0.0.1 ::1]
	I0815 18:23:30.037035   57390 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 18:23:30.037244   57390 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-729203 localhost] and IPs [192.168.50.3 127.0.0.1 ::1]
	I0815 18:23:30.037344   57390 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 18:23:30.037445   57390 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 18:23:30.037508   57390 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 18:23:30.037587   57390 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:23:30.037659   57390 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:23:30.037733   57390 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:23:30.037820   57390 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:23:30.037919   57390 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:23:30.038119   57390 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:23:30.038246   57390 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:23:30.038311   57390 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:23:30.038405   57390 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:23:30.039855   57390 out.go:235]   - Booting up control plane ...
	I0815 18:23:30.039948   57390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:23:30.040035   57390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:23:30.040114   57390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:23:30.040213   57390 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:23:30.040446   57390 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:23:30.040555   57390 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:23:30.040644   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:23:30.040793   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:23:30.040902   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:23:30.041163   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:23:30.041247   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:23:30.041506   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:23:30.041623   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:23:30.041819   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:23:30.041882   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:23:30.042057   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:23:30.042064   57390 kubeadm.go:310] 
	I0815 18:23:30.042097   57390 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:23:30.042132   57390 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:23:30.042138   57390 kubeadm.go:310] 
	I0815 18:23:30.042180   57390 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:23:30.042210   57390 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:23:30.042338   57390 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:23:30.042347   57390 kubeadm.go:310] 
	I0815 18:23:30.042453   57390 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:23:30.042504   57390 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:23:30.042548   57390 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:23:30.042562   57390 kubeadm.go:310] 
	I0815 18:23:30.042713   57390 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:23:30.042782   57390 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:23:30.042788   57390 kubeadm.go:310] 
	I0815 18:23:30.042872   57390 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:23:30.042945   57390 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:23:30.043007   57390 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:23:30.043066   57390 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
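Note: the troubleshooting hints kubeadm prints above can be run directly on the node. One possible way to chain them when the kubelet health endpoint refuses connections, assuming the CRI-O socket path shown in the log (sketch only):

    # is the kubelet process up at all?
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet --no-pager | tail -n 100
    # the same health probe kubeadm's kubelet-check performs
    curl -sSL http://localhost:10248/healthz
    # look for control-plane containers that started and then crashed
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause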
	W0815 18:23:30.043186   57390 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-729203 localhost] and IPs [192.168.50.3 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-729203 localhost] and IPs [192.168.50.3 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-729203 localhost] and IPs [192.168.50.3 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-729203 localhost] and IPs [192.168.50.3 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 18:23:30.043228   57390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:23:30.043436   57390 kubeadm.go:310] 
	I0815 18:23:31.118021   57390 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.074766002s)
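Note: after the first init attempt times out, minikube wipes the partial control plane with the kubeadm reset that just completed above and then re-runs kubeadm init with the same flags. A condensed manual equivalent, assuming the config path and CRI-O socket shown in the log (the --ignore-preflight-errors list is abbreviated here for readability):

    # tear down whatever the failed init left behind
    sudo kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    # retry initialization from the generated config
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=Port-10250,Swap,NumCPU,Mem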
	I0815 18:23:31.118094   57390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:23:31.133696   57390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:23:31.144767   57390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:23:31.144787   57390 kubeadm.go:157] found existing configuration files:
	
	I0815 18:23:31.144834   57390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:23:31.156189   57390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:23:31.156254   57390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:23:31.168182   57390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:23:31.178354   57390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:23:31.178416   57390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:23:31.188875   57390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:23:31.198663   57390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:23:31.198718   57390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:23:31.209631   57390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:23:31.219804   57390 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:23:31.219871   57390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:23:31.230544   57390 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:23:31.308014   57390 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:23:31.308134   57390 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:23:31.475539   57390 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:23:31.475678   57390 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:23:31.475803   57390 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:23:31.708242   57390 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:23:31.710256   57390 out.go:235]   - Generating certificates and keys ...
	I0815 18:23:31.710359   57390 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:23:31.710454   57390 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:23:31.710573   57390 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:23:31.710680   57390 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:23:31.710799   57390 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:23:31.710875   57390 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:23:31.710963   57390 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:23:31.711033   57390 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:23:31.711124   57390 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:23:31.711212   57390 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:23:31.711251   57390 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:23:31.711305   57390 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:23:31.830035   57390 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:23:31.985861   57390 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:23:32.104479   57390 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:23:32.222050   57390 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:23:32.241207   57390 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:23:32.242410   57390 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:23:32.242524   57390 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:23:32.411587   57390 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:23:32.413776   57390 out.go:235]   - Booting up control plane ...
	I0815 18:23:32.413914   57390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:23:32.422000   57390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:23:32.432038   57390 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:23:32.434269   57390 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:23:32.438520   57390 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:24:12.440362   57390 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:24:12.440710   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:24:12.440873   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:24:17.441545   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:24:17.441730   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:24:27.442441   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:24:27.442749   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:24:47.441872   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:24:47.442176   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:25:27.441938   57390 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:25:27.442195   57390 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:25:27.442220   57390 kubeadm.go:310] 
	I0815 18:25:27.442276   57390 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:25:27.442341   57390 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:25:27.442350   57390 kubeadm.go:310] 
	I0815 18:25:27.442410   57390 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:25:27.442458   57390 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:25:27.442591   57390 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:25:27.442603   57390 kubeadm.go:310] 
	I0815 18:25:27.442717   57390 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:25:27.442766   57390 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:25:27.442814   57390 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:25:27.442827   57390 kubeadm.go:310] 
	I0815 18:25:27.443052   57390 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:25:27.443226   57390 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:25:27.443249   57390 kubeadm.go:310] 
	I0815 18:25:27.443394   57390 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:25:27.443567   57390 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:25:27.443679   57390 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:25:27.443777   57390 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:25:27.443791   57390 kubeadm.go:310] 
	I0815 18:25:27.443996   57390 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:25:27.444146   57390 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:25:27.444236   57390 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:25:27.444304   57390 kubeadm.go:394] duration metric: took 3m55.277631624s to StartCluster
	I0815 18:25:27.444349   57390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:25:27.444414   57390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:25:27.494254   57390 cri.go:89] found id: ""
	I0815 18:25:27.494292   57390 logs.go:276] 0 containers: []
	W0815 18:25:27.494302   57390 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:25:27.494307   57390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:25:27.494390   57390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:25:27.531939   57390 cri.go:89] found id: ""
	I0815 18:25:27.531970   57390 logs.go:276] 0 containers: []
	W0815 18:25:27.531980   57390 logs.go:278] No container was found matching "etcd"
	I0815 18:25:27.531992   57390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:25:27.532050   57390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:25:27.569824   57390 cri.go:89] found id: ""
	I0815 18:25:27.569858   57390 logs.go:276] 0 containers: []
	W0815 18:25:27.569866   57390 logs.go:278] No container was found matching "coredns"
	I0815 18:25:27.569872   57390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:25:27.569939   57390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:25:27.616551   57390 cri.go:89] found id: ""
	I0815 18:25:27.616572   57390 logs.go:276] 0 containers: []
	W0815 18:25:27.616579   57390 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:25:27.616586   57390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:25:27.616640   57390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:25:27.653784   57390 cri.go:89] found id: ""
	I0815 18:25:27.653811   57390 logs.go:276] 0 containers: []
	W0815 18:25:27.653821   57390 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:25:27.653829   57390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:25:27.653897   57390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:25:27.688536   57390 cri.go:89] found id: ""
	I0815 18:25:27.688565   57390 logs.go:276] 0 containers: []
	W0815 18:25:27.688575   57390 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:25:27.688584   57390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:25:27.688662   57390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:25:27.728934   57390 cri.go:89] found id: ""
	I0815 18:25:27.728956   57390 logs.go:276] 0 containers: []
	W0815 18:25:27.728967   57390 logs.go:278] No container was found matching "kindnet"
	I0815 18:25:27.728976   57390 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:25:27.728990   57390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:25:27.850981   57390 logs.go:123] Gathering logs for container status ...
	I0815 18:25:27.851023   57390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:25:27.894758   57390 logs.go:123] Gathering logs for kubelet ...
	I0815 18:25:27.894790   57390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:25:27.946336   57390 logs.go:123] Gathering logs for dmesg ...
	I0815 18:25:27.946368   57390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:25:27.961285   57390 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:25:27.961311   57390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:25:28.097702   57390 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
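Note: with no control-plane containers found, minikube falls back to collecting CRI-O, container-status, kubelet, dmesg, and describe-nodes output; the describe-nodes step fails because the API server never came up on localhost:8443. The same evidence can be gathered by hand with roughly these commands (sketch only; the kubectl binary path and kubeconfig location are copied from the log above):

    sudo journalctl -u crio -n 400 --no-pager
    sudo crictl ps -a
    sudo journalctl -u kubelet -n 400 --no-pager
    sudo dmesg --level=warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig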
	W0815 18:25:28.097731   57390 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:25:28.097771   57390 out.go:270] * 
	* 
	W0815 18:25:28.097828   57390 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:25:28.097847   57390 out.go:270] * 
	W0815 18:25:28.098808   57390 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:25:28.102704   57390 out.go:201] 
	W0815 18:25:28.104025   57390 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:25:28.104082   57390 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:25:28.104101   57390 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:25:28.106567   57390 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
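The kubelet failure above matches what the minikube suggestion later in this log points at: a cgroup-driver mismatch between the kubelet and CRI-O on the v1.20.0 node. A minimal diagnostic sketch, assuming shell access to the same profile through `minikube ssh` (commands are illustrative; the file paths are the ones that appear elsewhere in this report and may differ on other guest images):

	# kubelet state and most recent errors on the node
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-729203 -- sudo systemctl status kubelet --no-pager
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-729203 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# compare the cgroup settings seen by CRI-O and by the kubelet
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-729203 -- sudo grep -ri cgroup /etc/crio/crio.conf.d/
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-729203 -- sudo grep -i cgroup /var/lib/kubelet/config.yaml
	# if the drivers disagree, retry the old-version start with the flag suggested in the log
	out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd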
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-729203
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-729203: (6.668745125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-729203 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-729203 status --format={{.Host}}: exit status 7 (84.688128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m1.679163348s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-729203 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (77.109253ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-729203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-729203
	    minikube start -p kubernetes-upgrade-729203 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7292032 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-729203 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
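The downgrade attempt is rejected before any change is made to the running profile, so the restart below should find the existing v1.31.0 cluster intact. A quick way to confirm that by hand, reusing commands already exercised earlier in this test (a sketch, not part of the recorded run):

	out/minikube-linux-amd64 -p kubernetes-upgrade-729203 status --format={{.Host}}
	kubectl --context kubernetes-upgrade-729203 version --output=json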
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-729203 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.318453531s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-08-15 18:27:30.052707917 +0000 UTC m=+4941.030813094
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-729203 -n kubernetes-upgrade-729203
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-729203 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-729203 logs -n 25: (1.748402481s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-443473 sudo                  | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-443473 sudo cat              | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-443473 sudo cat              | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-443473 sudo                  | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-443473 sudo                  | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-443473 sudo                  | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-443473 sudo find             | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-443473 sudo crio             | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-443473                       | cilium-443473             | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC | 15 Aug 24 18:24 UTC |
	| start   | -p force-systemd-flag-975168           | force-systemd-flag-975168 | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC | 15 Aug 24 18:25 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-618999            | force-systemd-env-618999  | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC | 15 Aug 24 18:24 UTC |
	| start   | -p cert-options-194487                 | cert-options-194487       | jenkins | v1.33.1 | 15 Aug 24 18:24 UTC | 15 Aug 24 18:25 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-975168 ssh cat      | force-systemd-flag-975168 | jenkins | v1.33.1 | 15 Aug 24 18:25 UTC | 15 Aug 24 18:25 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-975168           | force-systemd-flag-975168 | jenkins | v1.33.1 | 15 Aug 24 18:25 UTC | 15 Aug 24 18:25 UTC |
	| stop    | -p kubernetes-upgrade-729203           | kubernetes-upgrade-729203 | jenkins | v1.33.1 | 15 Aug 24 18:25 UTC | 15 Aug 24 18:25 UTC |
	| start   | -p stopped-upgrade-498665              | minikube                  | jenkins | v1.26.0 | 15 Aug 24 18:25 UTC | 15 Aug 24 18:26 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-729203           | kubernetes-upgrade-729203 | jenkins | v1.33.1 | 15 Aug 24 18:25 UTC | 15 Aug 24 18:26 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-194487 ssh                | cert-options-194487       | jenkins | v1.33.1 | 15 Aug 24 18:25 UTC | 15 Aug 24 18:25 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-194487 -- sudo         | cert-options-194487       | jenkins | v1.33.1 | 15 Aug 24 18:25 UTC | 15 Aug 24 18:25 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-194487                 | cert-options-194487       | jenkins | v1.33.1 | 15 Aug 24 18:25 UTC | 15 Aug 24 18:25 UTC |
	| start   | -p old-k8s-version-278865              | old-k8s-version-278865    | jenkins | v1.33.1 | 15 Aug 24 18:25 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-498665 stop            | minikube                  | jenkins | v1.26.0 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:26 UTC |
	| start   | -p stopped-upgrade-498665              | stopped-upgrade-498665    | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:27 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-729203           | kubernetes-upgrade-729203 | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-729203           | kubernetes-upgrade-729203 | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:27 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:26:36
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:26:36.771344   64974 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:26:36.771443   64974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:26:36.771447   64974 out.go:358] Setting ErrFile to fd 2...
	I0815 18:26:36.771451   64974 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:26:36.771623   64974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:26:36.772208   64974 out.go:352] Setting JSON to false
	I0815 18:26:36.773227   64974 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7743,"bootTime":1723738654,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:26:36.773286   64974 start.go:139] virtualization: kvm guest
	I0815 18:26:36.775350   64974 out.go:177] * [kubernetes-upgrade-729203] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:26:36.776599   64974 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:26:36.776598   64974 notify.go:220] Checking for updates...
	I0815 18:26:36.777872   64974 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:26:36.779210   64974 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:26:36.780607   64974 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:26:36.781836   64974 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:26:36.783052   64974 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:26:36.784668   64974 config.go:182] Loaded profile config "kubernetes-upgrade-729203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:26:36.785068   64974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:26:36.785113   64974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:26:36.800371   64974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0815 18:26:36.800805   64974 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:26:36.801363   64974 main.go:141] libmachine: Using API Version  1
	I0815 18:26:36.801390   64974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:26:36.801665   64974 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:26:36.801833   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:26:36.802056   64974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:26:36.802383   64974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:26:36.802427   64974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:26:36.817393   64974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37297
	I0815 18:26:36.817789   64974 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:26:36.818313   64974 main.go:141] libmachine: Using API Version  1
	I0815 18:26:36.818343   64974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:26:36.818714   64974 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:26:36.818934   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:26:36.852962   64974 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:26:36.854138   64974 start.go:297] selected driver: kvm2
	I0815 18:26:36.854154   64974 start.go:901] validating driver "kvm2" against &{Name:kubernetes-upgrade-729203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:26:36.854273   64974 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:26:36.854938   64974 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:26:36.855007   64974 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:26:36.869229   64974 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:26:36.869731   64974 cni.go:84] Creating CNI manager for ""
	I0815 18:26:36.869751   64974 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:26:36.869797   64974 start.go:340] cluster config:
	{Name:kubernetes-upgrade-729203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:26:36.869934   64974 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:26:36.871607   64974 out.go:177] * Starting "kubernetes-upgrade-729203" primary control-plane node in "kubernetes-upgrade-729203" cluster
	I0815 18:26:34.765746   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:34.766202   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:34.766222   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:34.766170   64609 retry.go:31] will retry after 4.21053318s: waiting for machine to come up
	I0815 18:26:36.872572   64974 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:26:36.872617   64974 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:26:36.872625   64974 cache.go:56] Caching tarball of preloaded images
	I0815 18:26:36.872690   64974 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:26:36.872700   64974 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 18:26:36.872776   64974 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/config.json ...
	I0815 18:26:36.872940   64974 start.go:360] acquireMachinesLock for kubernetes-upgrade-729203: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:26:38.977877   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:38.978302   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:38.978327   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:38.978262   64609 retry.go:31] will retry after 3.959621261s: waiting for machine to come up
	I0815 18:26:42.941583   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:42.942026   64368 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:26:42.942043   64368 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:26:42.942057   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:42.942578   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865
	I0815 18:26:43.020225   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:26:43.020252   64368 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:26:43.020265   64368 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:26:43.023222   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.023781   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.023806   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.024005   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:26:43.024046   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:26:43.024095   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:26:43.024110   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:26:43.024130   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:26:43.148744   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:26:43.148999   64368 main.go:141] libmachine: (old-k8s-version-278865) KVM machine creation complete!
	I0815 18:26:43.149424   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:26:43.149994   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:43.150217   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:43.150431   64368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 18:26:43.150455   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:26:43.151866   64368 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 18:26:43.151888   64368 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 18:26:43.151895   64368 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 18:26:43.151904   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.154259   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.154571   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.154603   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.154773   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:43.154949   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.155125   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.155271   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:43.155445   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:43.155728   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:43.155747   64368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 18:26:43.260226   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:26:43.260249   64368 main.go:141] libmachine: Detecting the provisioner...
	I0815 18:26:43.260257   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.263094   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.263460   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.263489   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.263653   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:43.263823   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.264103   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.264297   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:43.264463   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:43.264665   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:43.264680   64368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 18:26:43.369481   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 18:26:43.369606   64368 main.go:141] libmachine: found compatible host: buildroot
	I0815 18:26:43.369619   64368 main.go:141] libmachine: Provisioning with buildroot...
	I0815 18:26:43.369631   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:26:43.369858   64368 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:26:43.369885   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:26:43.370058   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.372749   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.373121   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.373148   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.373271   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:43.373431   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.373535   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.373632   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:43.373819   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:43.373984   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:43.373996   64368 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:26:43.495620   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:26:43.495643   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.498687   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.499012   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.499051   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.499196   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:43.499359   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.499523   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.499658   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:43.499819   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:43.500043   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:43.500070   64368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:26:43.613817   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
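The hosts-file command above is idempotent: it leaves /etc/hosts alone when the hostname is already mapped, rewrites an existing 127.0.1.1 entry when one exists, and appends a new entry otherwise. A minimal Go sketch of the same check-then-rewrite logic, operating on a local file instead of the SSH session the log uses (the helper name and error handling are illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the shell above: if no line already maps the
// hostname, rewrite an existing 127.0.1.1 entry or append a new one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.Match(data) {
		data = loopback.ReplaceAll(data, []byte("127.0.1.1 "+hostname))
	} else {
		data = append(data, []byte("127.0.1.1 "+hostname+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "old-k8s-version-278865"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}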
	I0815 18:26:43.613850   64368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:26:43.613884   64368 buildroot.go:174] setting up certificates
	I0815 18:26:43.613894   64368 provision.go:84] configureAuth start
	I0815 18:26:43.613912   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:26:43.614152   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:26:43.616903   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.617304   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.617338   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.617471   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.619662   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.619962   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.619999   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.620141   64368 provision.go:143] copyHostCerts
	I0815 18:26:43.620197   64368 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:26:43.620215   64368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:26:43.620273   64368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:26:43.620445   64368 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:26:43.620455   64368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:26:43.620479   64368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:26:43.620582   64368 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:26:43.620590   64368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:26:43.620609   64368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:26:43.620668   64368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
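configureAuth then generates a server certificate whose SANs cover 127.0.0.1, the VM IP 192.168.39.89, and the machine's hostnames, signed by the CA under .minikube/certs. A self-contained Go sketch of issuing a SAN-bearing certificate with crypto/x509; it self-signs for brevity, so the signer, key size, and lifetime are assumptions rather than minikube's exact parameters:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SAN list taken from the log line above; everything else is illustrative.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-278865"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-278865"},
	}
	// Self-signed here; the real flow signs with the CA key from certs/ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}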
	I0815 18:26:44.857581   64827 start.go:364] duration metric: took 15.096467355s to acquireMachinesLock for "stopped-upgrade-498665"
	I0815 18:26:44.857624   64827 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:26:44.857634   64827 fix.go:54] fixHost starting: 
	I0815 18:26:44.858041   64827 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:26:44.858100   64827 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:26:44.875380   64827 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0815 18:26:44.875798   64827 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:26:44.876317   64827 main.go:141] libmachine: Using API Version  1
	I0815 18:26:44.876344   64827 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:26:44.876725   64827 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:26:44.876931   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .DriverName
	I0815 18:26:44.877068   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetState
	I0815 18:26:44.878542   64827 fix.go:112] recreateIfNeeded on stopped-upgrade-498665: state=Stopped err=<nil>
	I0815 18:26:44.878563   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .DriverName
	W0815 18:26:44.878704   64827 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:26:44.880631   64827 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-498665" ...
	I0815 18:26:44.187517   64368 provision.go:177] copyRemoteCerts
	I0815 18:26:44.187575   64368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:26:44.187598   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.189998   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.190293   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.190322   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.190466   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.190712   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.190904   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.191064   64368 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:26:44.275738   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:26:44.298801   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:26:44.322708   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:26:44.345664   64368 provision.go:87] duration metric: took 731.75755ms to configureAuth
	I0815 18:26:44.345693   64368 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:26:44.345879   64368 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:26:44.345951   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.348519   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.348878   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.348897   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.349049   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.349241   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.349411   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.349536   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.349711   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:44.349910   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:44.349934   64368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:26:44.618371   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:26:44.618399   64368 main.go:141] libmachine: Checking connection to Docker...
	I0815 18:26:44.618408   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetURL
	I0815 18:26:44.619722   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using libvirt version 6000000
	I0815 18:26:44.621759   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.622141   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.622173   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.622278   64368 main.go:141] libmachine: Docker is up and running!
	I0815 18:26:44.622299   64368 main.go:141] libmachine: Reticulating splines...
	I0815 18:26:44.622306   64368 client.go:171] duration metric: took 24.862411526s to LocalClient.Create
	I0815 18:26:44.622336   64368 start.go:167] duration metric: took 24.862501737s to libmachine.API.Create "old-k8s-version-278865"
	I0815 18:26:44.622345   64368 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:26:44.622354   64368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:26:44.622372   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.622625   64368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:26:44.622656   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.624769   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.625099   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.625126   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.625269   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.625451   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.625624   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.625791   64368 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:26:44.707742   64368 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:26:44.711983   64368 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:26:44.712010   64368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:26:44.712082   64368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:26:44.712189   64368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:26:44.712278   64368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:26:44.721962   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:26:44.745826   64368 start.go:296] duration metric: took 123.470495ms for postStartSetup
	I0815 18:26:44.745872   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:26:44.746401   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:26:44.748801   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.749223   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.749245   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.749591   64368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:26:44.749795   64368 start.go:128] duration metric: took 25.011607097s to createHost
	I0815 18:26:44.749819   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.752199   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.752643   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.752667   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.752833   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.753037   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.753188   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.753331   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.753492   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:44.753656   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:44.753675   64368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:26:44.857401   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746404.831725234
	
	I0815 18:26:44.857433   64368 fix.go:216] guest clock: 1723746404.831725234
	I0815 18:26:44.857444   64368 fix.go:229] Guest: 2024-08-15 18:26:44.831725234 +0000 UTC Remote: 2024-08-15 18:26:44.749808719 +0000 UTC m=+50.995480451 (delta=81.916515ms)
	I0815 18:26:44.857483   64368 fix.go:200] guest clock delta is within tolerance: 81.916515ms
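The clock check parses the guest's `date +%s.%N` output and compares it against the host-side reference timestamp, accepting the ~82ms skew seen here. A small Go sketch of that comparison using the values from this log; the 2-second tolerance is an assumed threshold, not necessarily the one minikube applies:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values taken from the log above; 2s tolerance is an assumption.
	guest, err := parseGuestClock("1723746404.831725234")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 8, 15, 18, 26, 44, 749808719, time.UTC)
	delta := guest.Sub(remote)
	if math.Abs(delta.Seconds()) <= 2 {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock skewed by %v, would resync\n", delta)
	}
}

Running this prints a delta of 81.916515ms, matching the fix.go line in the log.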
	I0815 18:26:44.857491   64368 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 25.119510908s
	I0815 18:26:44.857518   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.857805   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:26:44.860347   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.860781   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.860810   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.860957   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.861677   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.861892   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.861979   64368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:26:44.862025   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.862102   64368 ssh_runner.go:195] Run: cat /version.json
	I0815 18:26:44.862125   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.865296   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.865461   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.865682   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.865711   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.865863   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.865878   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.865906   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.866041   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.866055   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.866211   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.866215   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.866382   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.866383   64368 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:26:44.866560   64368 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:26:44.968627   64368 ssh_runner.go:195] Run: systemctl --version
	I0815 18:26:44.974805   64368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:26:45.138147   64368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:26:45.144313   64368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:26:45.144377   64368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:26:45.160056   64368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:26:45.160081   64368 start.go:495] detecting cgroup driver to use...
	I0815 18:26:45.160158   64368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:26:45.178269   64368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:26:45.193028   64368 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:26:45.193089   64368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:26:45.207037   64368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:26:45.226663   64368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:26:45.358346   64368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:26:45.506003   64368 docker.go:233] disabling docker service ...
	I0815 18:26:45.506076   64368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:26:45.527696   64368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:26:45.552024   64368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:26:45.701984   64368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:26:45.817868   64368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:26:45.832805   64368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:26:45.854229   64368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:26:45.854291   64368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:26:45.866467   64368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:26:45.866524   64368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:26:45.877712   64368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:26:45.890109   64368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:26:45.902120   64368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:26:45.914157   64368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:26:45.925673   64368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:26:45.925731   64368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:26:45.946835   64368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
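The sequence above probes for the bridge netfilter sysctl, loads br_netfilter when it is missing, and enables IPv4 forwarding before CRI-O is restarted. A rough Go replay of those three steps (requires root; a sketch of the technique, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same probe the log runs: the sysctl fails with status 255 if br_netfilter is not loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
			os.Exit(1)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "could not enable ip_forward:", err)
		os.Exit(1)
	}
	fmt.Println("netfilter and ip_forward ready for the bridge CNI")
}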
	I0815 18:26:45.958098   64368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:26:46.099956   64368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:26:46.255366   64368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:26:46.255441   64368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:26:46.260549   64368 start.go:563] Will wait 60s for crictl version
	I0815 18:26:46.260611   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:46.264202   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:26:46.308555   64368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
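Both 60-second waits above are simple polls: keep checking for the CRI-O socket, and then for a working crictl, until a deadline passes. A minimal Go sketch of that wait loop, with the function name and the 500ms poll interval assumed:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses,
// a stand-in for "Will wait 60s for socket path /var/run/crio/crio.sock".
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}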
	I0815 18:26:46.308649   64368 ssh_runner.go:195] Run: crio --version
	I0815 18:26:46.337148   64368 ssh_runner.go:195] Run: crio --version
	I0815 18:26:46.367888   64368 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:26:46.369308   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:26:46.372288   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:46.372697   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:46.372730   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:46.372922   64368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:26:46.377064   64368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:26:46.391250   64368 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:26:46.391386   64368 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:26:46.391451   64368 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:26:46.433479   64368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:26:46.433561   64368 ssh_runner.go:195] Run: which lz4
	I0815 18:26:46.438432   64368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:26:46.442642   64368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:26:46.442666   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:26:48.132589   64368 crio.go:462] duration metric: took 1.694196037s to copy over tarball
	I0815 18:26:48.132674   64368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:26:44.882083   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .Start
	I0815 18:26:44.882243   64827 main.go:141] libmachine: (stopped-upgrade-498665) Ensuring networks are active...
	I0815 18:26:44.883164   64827 main.go:141] libmachine: (stopped-upgrade-498665) Ensuring network default is active
	I0815 18:26:44.883534   64827 main.go:141] libmachine: (stopped-upgrade-498665) Ensuring network mk-stopped-upgrade-498665 is active
	I0815 18:26:44.884547   64827 main.go:141] libmachine: (stopped-upgrade-498665) Getting domain xml...
	I0815 18:26:44.884865   64827 main.go:141] libmachine: (stopped-upgrade-498665) Creating domain...
	I0815 18:26:46.200582   64827 main.go:141] libmachine: (stopped-upgrade-498665) Waiting to get IP...
	I0815 18:26:46.201437   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:46.201920   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:46.201971   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:46.201893   65051 retry.go:31] will retry after 281.038728ms: waiting for machine to come up
	I0815 18:26:46.484328   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:46.484875   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:46.484898   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:46.484829   65051 retry.go:31] will retry after 276.827953ms: waiting for machine to come up
	I0815 18:26:46.763288   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:46.763876   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:46.763906   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:46.763836   65051 retry.go:31] will retry after 331.905884ms: waiting for machine to come up
	I0815 18:26:47.097403   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:47.097881   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:47.097909   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:47.097829   65051 retry.go:31] will retry after 529.444748ms: waiting for machine to come up
	I0815 18:26:47.629179   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:47.629715   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:47.629739   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:47.629632   65051 retry.go:31] will retry after 501.289073ms: waiting for machine to come up
	I0815 18:26:48.132455   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:48.133249   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:48.133301   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:48.133214   65051 retry.go:31] will retry after 713.477599ms: waiting for machine to come up
	I0815 18:26:48.847896   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:48.848358   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:48.848389   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:48.848279   65051 retry.go:31] will retry after 731.358859ms: waiting for machine to come up
	I0815 18:26:49.581252   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:49.581650   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:49.581677   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:49.581605   65051 retry.go:31] will retry after 998.177733ms: waiting for machine to come up
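The DHCP-lease wait for stopped-upgrade-498665 is a retry loop whose sleep grows from a few hundred milliseconds toward a second, with some jitter, until the domain reports an IP. A Go sketch of that pattern; lookupIP stands in for the libvirt lease query, and the initial delay and growth factor are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps polling for an address, sleeping a growing, jittered
// interval between attempts, until it succeeds or the timeout passes.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d: no IP yet, retrying after %v\n", attempt, wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay, roughly like the 281ms -> 998ms progression above
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Dummy lookup that succeeds immediately, just to exercise the loop.
	ip, err := waitForIP(func() (string, error) { return "192.168.39.89", nil }, time.Minute)
	fmt.Println(ip, err)
}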
	I0815 18:26:50.690623   64368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.557914585s)
	I0815 18:26:50.690667   64368 crio.go:469] duration metric: took 2.558038185s to extract the tarball
	I0815 18:26:50.690678   64368 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:26:50.734827   64368 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:26:50.781327   64368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:26:50.781351   64368 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:26:50.781472   64368 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:50.781494   64368 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:50.781509   64368 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:26:50.781522   64368 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:26:50.781438   64368 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:50.781548   64368 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:50.781438   64368 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:50.781440   64368 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:26:50.783531   64368 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:26:50.783595   64368 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:50.783606   64368 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:50.783621   64368 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:50.783542   64368 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:26:50.783549   64368 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:50.783676   64368 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:26:50.783567   64368 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:50.949780   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:50.960111   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:26:51.000741   64368 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:26:51.000796   64368 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:51.000846   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.008063   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.012373   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:51.012446   64368 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:26:51.012518   64368 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:26:51.012561   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.059855   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:26:51.059964   64368 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:26:51.060036   64368 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.060080   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:51.060116   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.097244   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.113511   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:26:51.113550   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.113604   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:51.137186   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.155409   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.173814   64368 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:26:51.173862   64368 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.173912   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.228022   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:26:51.238588   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.238597   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:26:51.278941   64368 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:26:51.278981   64368 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.279032   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.283071   64368 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:26:51.283118   64368 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.283172   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.283177   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.307891   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:26:51.320952   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.321007   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.321008   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.325798   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.353707   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.416632   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:26:51.435034   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.435089   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.459584   64368 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:26:51.459633   64368 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.459664   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.459679   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.513398   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.513440   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.530000   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.530050   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:26:51.586456   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:26:51.586462   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:26:51.597704   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.629930   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.674431   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:26:51.711237   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:26:51.863355   64368 cache_images.go:92] duration metric: took 1.081984361s to LoadCachedImages
	W0815 18:26:51.863460   64368 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
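Each "needs transfer" line above is the result of comparing the image ID reported by the runtime against the expected hash, then removing the stale copy so the cached tarball can be loaded instead. A short Go sketch of that decision using the same podman/crictl commands the log runs (loading the cached image afterwards is elided):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage removes an image whose ID does not match the expected hash,
// mirroring the "needs transfer ... does not exist at hash" decisions above.
func ensureImage(image, wantID string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present with the right content
	}
	fmt.Printf("%q needs transfer, removing stale copy\n", image)
	if err := exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run(); err != nil {
		return fmt.Errorf("rmi %s: %w", image, err)
	}
	// The real flow would now load the image from .minikube/cache/images/...
	return nil
}

func main() {
	_ = ensureImage("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c")
}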
	I0815 18:26:51.863476   64368 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:26:51.863612   64368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:26:51.863694   64368 ssh_runner.go:195] Run: crio config
	I0815 18:26:51.915393   64368 cni.go:84] Creating CNI manager for ""
	I0815 18:26:51.915466   64368 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:26:51.915482   64368 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:26:51.915509   64368 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:26:51.915688   64368 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:26:51.915758   64368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:26:51.930339   64368 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:26:51.930423   64368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:26:51.943602   64368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:26:51.960687   64368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:26:51.978511   64368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:26:51.996372   64368 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:26:52.000262   64368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
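The two Run lines above show how the control-plane hostname gets pinned in the guest's /etc/hosts: any stale line ending in the tab-separated hostname is filtered out and a fresh "IP<TAB>hostname" mapping is appended. As a hedged illustration only (this is not minikube's source; the scratch file name is an assumption), the same replace-and-append step could be sketched in Go like this:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry rewrites an /etc/hosts-style file so exactly one line maps
// hostname to ip, mirroring the grep -v / echo pipeline in the log above.
func pinHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for the hostname (tab-separated suffix).
		if strings.HasSuffix(line, "\t"+hostname) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Work on a scratch copy rather than the real /etc/hosts.
	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := pinHostsEntry("hosts.test", "192.168.39.89", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}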
	I0815 18:26:52.013040   64368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:26:52.138671   64368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:26:52.157474   64368 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:26:52.157508   64368 certs.go:194] generating shared ca certs ...
	I0815 18:26:52.157530   64368 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.157717   64368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:26:52.157775   64368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:26:52.157793   64368 certs.go:256] generating profile certs ...
	I0815 18:26:52.157870   64368 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:26:52.157891   64368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt with IP's: []
	I0815 18:26:52.256783   64368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt ...
	I0815 18:26:52.256817   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: {Name:mk489eb0952cf53a915129fd288ab2fd07350a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.257013   64368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key ...
	I0815 18:26:52.257029   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key: {Name:mke75b69e3e7b80a3685923312134ea2bd16478b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.257133   64368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:26:52.257157   64368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt.b00e3c1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
	I0815 18:26:52.514942   64368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt.b00e3c1a ...
	I0815 18:26:52.514971   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt.b00e3c1a: {Name:mk71bef92184d414517f936910c6b02a23ca09b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.515125   64368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a ...
	I0815 18:26:52.515138   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a: {Name:mk37306c17621e8d5ca942be7928f51bd17080bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.515212   64368 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt.b00e3c1a -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt
	I0815 18:26:52.515294   64368 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key
	I0815 18:26:52.515345   64368 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:26:52.515360   64368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt with IP's: []
	I0815 18:26:52.594313   64368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt ...
	I0815 18:26:52.594340   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt: {Name:mk9a46182f3609e6c7e843c3472924b6ae54f09a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.594530   64368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key ...
	I0815 18:26:52.594547   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key: {Name:mk88cd43667643e2f89a51eb09f9690b55733f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.594747   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:26:52.594785   64368 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:26:52.594795   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:26:52.594822   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:26:52.594844   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:26:52.594866   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:26:52.594902   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:26:52.595486   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:26:52.628555   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:26:52.656996   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:26:52.685663   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:26:52.710069   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:26:52.735575   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:26:52.759775   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:26:52.785418   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:26:52.809846   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:26:52.836180   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:26:52.862384   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:26:52.890166   64368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:26:52.908973   64368 ssh_runner.go:195] Run: openssl version
	I0815 18:26:52.915186   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:26:52.927217   64368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:26:52.932155   64368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:26:52.932220   64368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:26:52.938718   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:26:52.950738   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:26:52.962656   64368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:26:52.967654   64368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:26:52.967723   64368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:26:52.974702   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:26:52.993677   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:26:53.013657   64368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:26:53.019869   64368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:26:53.019936   64368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:26:53.030433   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
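The hashing/symlink pairs above are how the copied CA certificates are registered with the guest's trust store: `openssl x509 -hash -noout` yields the subject hash (3ec20f2e, b5213941, 51391683 in this run) and a `<hash>.0` symlink is created under /etc/ssl/certs if one is not already present. A minimal, hedged sketch of that step (shelling out to openssl rather than reimplementing its hash; the paths in main are assumptions, not from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// conventional <hash>.0 symlink in certsDir, skipping it if one already exists.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink (or file) already present
	}
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative invocation; in the log the target directory is /etc/ssl/certs.
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/tmp"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}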
	I0815 18:26:53.053149   64368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:26:53.058133   64368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 18:26:53.058206   64368 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:26:53.058320   64368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:26:53.058382   64368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:26:53.102549   64368 cri.go:89] found id: ""
	I0815 18:26:53.102635   64368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:26:53.113775   64368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:26:53.124452   64368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:26:53.135251   64368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:26:53.135277   64368 kubeadm.go:157] found existing configuration files:
	
	I0815 18:26:53.135332   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:26:53.145809   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:26:53.145875   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:26:53.155995   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:26:53.166137   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:26:53.166212   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:26:53.176057   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:26:53.187552   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:26:53.187628   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:26:53.199467   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:26:53.209707   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:26:53.209774   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:26:53.220118   64368 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:26:53.362998   64368 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:26:53.363125   64368 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:26:53.532702   64368 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:26:53.532876   64368 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:26:53.532987   64368 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:26:53.733641   64368 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:26:50.582277   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:50.582853   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:50.582877   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:50.582829   65051 retry.go:31] will retry after 1.637536533s: waiting for machine to come up
	I0815 18:26:52.221654   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:52.222124   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:52.222175   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:52.222088   65051 retry.go:31] will retry after 1.767325671s: waiting for machine to come up
	I0815 18:26:53.991485   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:53.992003   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:53.992039   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:53.991969   65051 retry.go:31] will retry after 2.683899546s: waiting for machine to come up
	I0815 18:26:53.923997   64368 out.go:235]   - Generating certificates and keys ...
	I0815 18:26:53.924126   64368 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:26:53.924258   64368 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:26:53.945246   64368 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 18:26:54.037805   64368 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 18:26:54.212969   64368 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 18:26:54.386613   64368 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 18:26:54.622292   64368 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 18:26:54.622494   64368 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-278865] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0815 18:26:54.764357   64368 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 18:26:54.764788   64368 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-278865] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0815 18:26:54.952589   64368 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 18:26:55.225522   64368 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 18:26:55.650538   64368 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 18:26:55.650961   64368 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:26:55.762725   64368 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:26:56.074128   64368 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:26:56.262699   64368 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:26:56.452684   64368 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:26:56.468703   64368 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:26:56.469284   64368 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:26:56.469353   64368 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:26:56.630461   64368 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:26:56.632158   64368 out.go:235]   - Booting up control plane ...
	I0815 18:26:56.632301   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:26:56.640533   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:26:56.642342   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:26:56.644109   64368 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:26:56.650394   64368 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:26:56.677640   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:56.678277   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:56.678315   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:56.678209   65051 retry.go:31] will retry after 2.205584757s: waiting for machine to come up
	I0815 18:26:58.885924   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:26:58.886427   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | unable to find current IP address of domain stopped-upgrade-498665 in network mk-stopped-upgrade-498665
	I0815 18:26:58.886471   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | I0815 18:26:58.886370   65051 retry.go:31] will retry after 3.913163581s: waiting for machine to come up
	I0815 18:27:04.180886   64974 start.go:364] duration metric: took 27.307920962s to acquireMachinesLock for "kubernetes-upgrade-729203"
	I0815 18:27:04.180946   64974 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:27:04.180962   64974 fix.go:54] fixHost starting: 
	I0815 18:27:04.181366   64974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:27:04.181409   64974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:27:04.201802   64974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35615
	I0815 18:27:04.202264   64974 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:27:04.202807   64974 main.go:141] libmachine: Using API Version  1
	I0815 18:27:04.202828   64974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:27:04.203166   64974 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:27:04.203347   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:27:04.203477   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetState
	I0815 18:27:04.205203   64974 fix.go:112] recreateIfNeeded on kubernetes-upgrade-729203: state=Running err=<nil>
	W0815 18:27:04.205225   64974 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:27:04.207262   64974 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-729203" VM ...
	I0815 18:27:02.801564   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:02.802083   64827 main.go:141] libmachine: (stopped-upgrade-498665) Found IP for machine: 192.168.72.80
	I0815 18:27:02.802112   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has current primary IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:02.802141   64827 main.go:141] libmachine: (stopped-upgrade-498665) Reserving static IP address...
	I0815 18:27:02.802497   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "stopped-upgrade-498665", mac: "52:54:00:86:94:de", ip: "192.168.72.80"} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:02.802523   64827 main.go:141] libmachine: (stopped-upgrade-498665) Reserved static IP address: 192.168.72.80
	I0815 18:27:02.802556   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | skip adding static IP to network mk-stopped-upgrade-498665 - found existing host DHCP lease matching {name: "stopped-upgrade-498665", mac: "52:54:00:86:94:de", ip: "192.168.72.80"}
	I0815 18:27:02.802574   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | Getting to WaitForSSH function...
	I0815 18:27:02.802591   64827 main.go:141] libmachine: (stopped-upgrade-498665) Waiting for SSH to be available...
	I0815 18:27:02.804622   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:02.804942   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:02.804971   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:02.805131   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | Using SSH client type: external
	I0815 18:27:02.805160   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/stopped-upgrade-498665/id_rsa (-rw-------)
	I0815 18:27:02.805194   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.80 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/stopped-upgrade-498665/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:27:02.805207   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | About to run SSH command:
	I0815 18:27:02.805228   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | exit 0
	I0815 18:27:02.892028   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | SSH cmd err, output: <nil>: 
	I0815 18:27:02.892401   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetConfigRaw
	I0815 18:27:02.893004   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetIP
	I0815 18:27:02.895409   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:02.895765   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:02.895793   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:02.896014   64827 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/config.json ...
	I0815 18:27:02.896222   64827 machine.go:93] provisionDockerMachine start ...
	I0815 18:27:02.896240   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .DriverName
	I0815 18:27:02.896417   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:02.898486   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:02.898776   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:02.898811   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:02.898903   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:02.899077   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:02.899211   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:02.899344   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:02.899561   64827 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:02.899757   64827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0815 18:27:02.899770   64827 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:27:03.008431   64827 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:27:03.008461   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetMachineName
	I0815 18:27:03.008737   64827 buildroot.go:166] provisioning hostname "stopped-upgrade-498665"
	I0815 18:27:03.008762   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetMachineName
	I0815 18:27:03.008947   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:03.011981   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.012350   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:03.012392   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.012578   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:03.012763   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:03.012906   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:03.013064   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:03.013259   64827 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:03.013461   64827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0815 18:27:03.013475   64827 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-498665 && echo "stopped-upgrade-498665" | sudo tee /etc/hostname
	I0815 18:27:03.144033   64827 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-498665
	
	I0815 18:27:03.144063   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:03.146955   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.147322   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:03.147353   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.147533   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:03.147734   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:03.147931   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:03.148098   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:03.148277   64827 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:03.148441   64827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0815 18:27:03.148459   64827 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-498665' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-498665/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-498665' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:27:03.264416   64827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:27:03.264442   64827 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:27:03.264460   64827 buildroot.go:174] setting up certificates
	I0815 18:27:03.264468   64827 provision.go:84] configureAuth start
	I0815 18:27:03.264477   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetMachineName
	I0815 18:27:03.264803   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetIP
	I0815 18:27:03.267422   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.267755   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:03.267786   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.267980   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:03.270394   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.270720   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:03.270750   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.270883   64827 provision.go:143] copyHostCerts
	I0815 18:27:03.270939   64827 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:27:03.270955   64827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:27:03.271007   64827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:27:03.271107   64827 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:27:03.271115   64827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:27:03.271134   64827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:27:03.271213   64827 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:27:03.271222   64827 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:27:03.271242   64827 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:27:03.271315   64827 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-498665 san=[127.0.0.1 192.168.72.80 localhost minikube stopped-upgrade-498665]
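The provision.go:117 line above reports generation of the machine's server certificate with both IP and DNS SANs. As a simplified, hedged sketch of producing such a SAN-bearing certificate with Go's standard library (self-signed for brevity, whereas the real provisioner signs with the minikube CA key; the SAN values are copied from the log line):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a server certificate carrying the same SAN set
	// that the log reports for stopped-upgrade-498665.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-498665"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-498665"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.80")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}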
	I0815 18:27:03.524781   64827 provision.go:177] copyRemoteCerts
	I0815 18:27:03.524837   64827 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:27:03.524862   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:03.527286   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.527681   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:03.527722   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.527866   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:03.528070   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:03.528200   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:03.528308   64827 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/stopped-upgrade-498665/id_rsa Username:docker}
	I0815 18:27:03.613017   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:27:03.634063   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:27:03.655485   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:27:03.676172   64827 provision.go:87] duration metric: took 411.691502ms to configureAuth
	I0815 18:27:03.676201   64827 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:27:03.676380   64827 config.go:182] Loaded profile config "stopped-upgrade-498665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0815 18:27:03.676457   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:03.678978   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.679364   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:03.679398   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.679546   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:03.679767   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:03.679931   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:03.680113   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:03.680265   64827 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:03.680514   64827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0815 18:27:03.680533   64827 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:27:03.944111   64827 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:27:03.944138   64827 machine.go:96] duration metric: took 1.047903585s to provisionDockerMachine
	I0815 18:27:03.944153   64827 start.go:293] postStartSetup for "stopped-upgrade-498665" (driver="kvm2")
	I0815 18:27:03.944175   64827 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:27:03.944212   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .DriverName
	I0815 18:27:03.944610   64827 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:27:03.944647   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:03.947140   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.947470   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:03.947499   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:03.947686   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:03.947908   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:03.948073   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:03.948373   64827 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/stopped-upgrade-498665/id_rsa Username:docker}
	I0815 18:27:04.034923   64827 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:27:04.038706   64827 info.go:137] Remote host: Buildroot 2021.02.12
	I0815 18:27:04.038729   64827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:27:04.038800   64827 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:27:04.038893   64827 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:27:04.039005   64827 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:27:04.046714   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:27:04.067350   64827 start.go:296] duration metric: took 123.176267ms for postStartSetup
	I0815 18:27:04.067385   64827 fix.go:56] duration metric: took 19.20975613s for fixHost
	I0815 18:27:04.067405   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:04.070054   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:04.070409   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:04.070438   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:04.070554   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:04.070769   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:04.070950   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:04.071143   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:04.071389   64827 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:04.071591   64827 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.80 22 <nil> <nil>}
	I0815 18:27:04.071605   64827 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:27:04.180736   64827 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746424.130842079
	
	I0815 18:27:04.180755   64827 fix.go:216] guest clock: 1723746424.130842079
	I0815 18:27:04.180764   64827 fix.go:229] Guest: 2024-08-15 18:27:04.130842079 +0000 UTC Remote: 2024-08-15 18:27:04.06738917 +0000 UTC m=+34.459642547 (delta=63.452909ms)
	I0815 18:27:04.180788   64827 fix.go:200] guest clock delta is within tolerance: 63.452909ms
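The fix.go lines above run `date +%s.%N` inside the VM and compare the result to the host's wall clock; the ~63ms delta falls inside the drift tolerance, so no clock adjustment is needed. A minimal hedged sketch of that comparison (the tolerance value and parsing are assumptions for illustration, not minikube's code):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDeltaOK parses the guest's `date +%s.%N` output and reports whether
// the drift from the local clock stays inside tolerance.
func clockDeltaOK(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, false, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance, nil
}

func main() {
	// Guest timestamp taken from the log line; run "now", the delta will be large.
	delta, ok, err := clockDeltaOK("1723746424.130842079\n", 2*time.Second)
	fmt.Println(delta, ok, err)
}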
	I0815 18:27:04.180811   64827 start.go:83] releasing machines lock for "stopped-upgrade-498665", held for 19.323190204s
	I0815 18:27:04.180844   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .DriverName
	I0815 18:27:04.181090   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetIP
	I0815 18:27:04.183944   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:04.184362   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:04.184391   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:04.184540   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .DriverName
	I0815 18:27:04.184993   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .DriverName
	I0815 18:27:04.185175   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .DriverName
	I0815 18:27:04.185270   64827 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:27:04.185314   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:04.185423   64827 ssh_runner.go:195] Run: cat /version.json
	I0815 18:27:04.185448   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHHostname
	I0815 18:27:04.187856   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:04.188268   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:04.188321   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:04.188343   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:04.188564   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:04.188761   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:04.188906   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:04.188932   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:04.188964   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:04.189117   64827 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/stopped-upgrade-498665/id_rsa Username:docker}
	I0815 18:27:04.189157   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHPort
	I0815 18:27:04.189366   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHKeyPath
	I0815 18:27:04.189693   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetSSHUsername
	I0815 18:27:04.189849   64827 sshutil.go:53] new ssh client: &{IP:192.168.72.80 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/stopped-upgrade-498665/id_rsa Username:docker}
	W0815 18:27:04.293553   64827 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0815 18:27:04.293638   64827 ssh_runner.go:195] Run: systemctl --version
	I0815 18:27:04.298881   64827 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:27:04.440845   64827 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:27:04.447864   64827 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:27:04.447927   64827 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:27:04.461009   64827 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
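(The find/mv invocation above is what renames any bridge/podman CNI config out of the way; on this guest its net effect is equivalent to the single rename below.)

# Net effect of the CNI-disable step logged above on this VM:
sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
        /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled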
	I0815 18:27:04.461033   64827 start.go:495] detecting cgroup driver to use...
	I0815 18:27:04.461121   64827 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:27:04.475975   64827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:27:04.487736   64827 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:27:04.487798   64827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:27:04.501141   64827 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:27:04.512706   64827 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:27:04.617440   64827 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:27:04.739148   64827 docker.go:233] disabling docker service ...
	I0815 18:27:04.739218   64827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:27:04.751520   64827 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:27:04.762701   64827 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:27:04.885864   64827 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:27:05.002439   64827 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:27:05.014104   64827 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:27:05.029155   64827 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0815 18:27:05.029221   64827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:05.037487   64827 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:27:05.037540   64827 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:05.047192   64827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:05.058965   64827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:05.070681   64827 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:27:05.080548   64827 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:05.089060   64827 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:05.106942   64827 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:05.116698   64827 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:27:05.126967   64827 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:27:05.127036   64827 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:27:05.139259   64827 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:27:05.147915   64827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:27:05.264654   64827 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:27:05.400526   64827 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:27:05.400612   64827 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:27:05.404870   64827 start.go:563] Will wait 60s for crictl version
	I0815 18:27:05.404912   64827 ssh_runner.go:195] Run: which crictl
	I0815 18:27:05.409857   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:27:05.442130   64827 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.22.3
	RuntimeApiVersion:  v1alpha2
	I0815 18:27:05.442200   64827 ssh_runner.go:195] Run: crio --version
	I0815 18:27:05.479004   64827 ssh_runner.go:195] Run: crio --version
	I0815 18:27:05.513198   64827 out.go:177] * Preparing Kubernetes v1.24.1 on CRI-O 1.22.3 ...
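(Taken together, the crictl.yaml write and the sed edits logged above configure CRI-O for cgroupfs and the 3.7 pause image. A sketch of how to verify the resulting state on the guest — expected values shown as comments, derived from the commands above; other settings in 02-crio.conf are left untouched:)

# Verify the runtime configuration produced by the steps above (sketch).
cat /etc/crictl.yaml
#   runtime-endpoint: unix:///var/run/crio/crio.sock
grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
#   pause_image = "registry.k8s.io/pause:3.7"
#   cgroup_manager = "cgroupfs"
#   conmon_cgroup = "pod"
#   "net.ipv4.ip_unprivileged_port_start=0",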
	I0815 18:27:04.208559   64974 machine.go:93] provisionDockerMachine start ...
	I0815 18:27:04.208591   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:27:04.208764   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:04.210998   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.211453   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:04.211493   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.211618   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:04.211780   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.211938   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.212086   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:04.212273   64974 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:04.212478   64974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:27:04.212526   64974 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:27:04.317532   64974 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-729203
	
	I0815 18:27:04.317560   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetMachineName
	I0815 18:27:04.317785   64974 buildroot.go:166] provisioning hostname "kubernetes-upgrade-729203"
	I0815 18:27:04.317814   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetMachineName
	I0815 18:27:04.317991   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:04.320889   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.321425   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:04.321470   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.321630   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:04.321839   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.322049   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.322197   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:04.322435   64974 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:04.322597   64974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:27:04.322609   64974 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-729203 && echo "kubernetes-upgrade-729203" | sudo tee /etc/hostname
	I0815 18:27:04.438818   64974 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-729203
	
	I0815 18:27:04.438853   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:04.442070   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.442504   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:04.442534   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.442756   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:04.442930   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.443100   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.443247   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:04.443437   64974 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:04.443653   64974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:27:04.443677   64974 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-729203' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-729203/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-729203' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:27:04.549700   64974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:27:04.549750   64974 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:27:04.549783   64974 buildroot.go:174] setting up certificates
	I0815 18:27:04.549797   64974 provision.go:84] configureAuth start
	I0815 18:27:04.549814   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetMachineName
	I0815 18:27:04.550180   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetIP
	I0815 18:27:04.553204   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.553642   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:04.553665   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.553870   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:04.556047   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.556443   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:04.556475   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.556609   64974 provision.go:143] copyHostCerts
	I0815 18:27:04.556656   64974 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:27:04.556692   64974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:27:04.556754   64974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:27:04.556877   64974 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:27:04.556890   64974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:27:04.556917   64974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:27:04.556986   64974 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:27:04.556996   64974 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:27:04.557019   64974 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:27:04.557082   64974 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-729203 san=[127.0.0.1 192.168.50.3 kubernetes-upgrade-729203 localhost minikube]
	I0815 18:27:04.762180   64974 provision.go:177] copyRemoteCerts
	I0815 18:27:04.762254   64974 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:27:04.762288   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:04.765391   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.765803   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:04.765830   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.765992   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:04.766164   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.766317   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:04.766429   64974 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa Username:docker}
	I0815 18:27:04.851494   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:27:04.878394   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0815 18:27:04.905653   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:27:04.936723   64974 provision.go:87] duration metric: took 386.909313ms to configureAuth
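(The configureAuth block above generates a server certificate signed by the minikube CA with the listed SANs and scp's it to /etc/docker on the guest. minikube does this in-process in Go; purely for illustration, an openssl equivalent — subject CN and the validity period are assumptions, the org and SANs are taken from the provision.go line above — would look like:)

# Illustration only: openssl equivalent of the server-cert generation logged above.
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server-key.pem -out server.csr \
  -subj "/O=jenkins.kubernetes-upgrade-729203/CN=minikube"
openssl x509 -req -in server.csr \
  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out server.pem -days 365 \
  -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.50.3,DNS:kubernetes-upgrade-729203,DNS:localhost,DNS:minikube")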
	I0815 18:27:04.936762   64974 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:27:04.936981   64974 config.go:182] Loaded profile config "kubernetes-upgrade-729203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:27:04.937077   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:04.939896   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.940305   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:04.940349   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:04.940467   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:04.940708   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.940869   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:04.941017   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:04.941187   64974 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:04.941358   64974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:27:04.941386   64974 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:27:05.514493   64827 main.go:141] libmachine: (stopped-upgrade-498665) Calling .GetIP
	I0815 18:27:05.517333   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:05.517720   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:86:94:de", ip: ""} in network mk-stopped-upgrade-498665: {Iface:virbr1 ExpiryTime:2024-08-15 19:26:55 +0000 UTC Type:0 Mac:52:54:00:86:94:de Iaid: IPaddr:192.168.72.80 Prefix:24 Hostname:stopped-upgrade-498665 Clientid:01:52:54:00:86:94:de}
	I0815 18:27:05.517750   64827 main.go:141] libmachine: (stopped-upgrade-498665) DBG | domain stopped-upgrade-498665 has defined IP address 192.168.72.80 and MAC address 52:54:00:86:94:de in network mk-stopped-upgrade-498665
	I0815 18:27:05.517926   64827 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 18:27:05.521667   64827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:27:05.531929   64827 kubeadm.go:883] updating cluster {Name:stopped-upgrade-498665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stop
ped-upgrade-498665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0815 18:27:05.532030   64827 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0815 18:27:05.532078   64827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:27:05.564775   64827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0815 18:27:05.564838   64827 ssh_runner.go:195] Run: which lz4
	I0815 18:27:05.568242   64827 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:27:05.572270   64827 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:27:05.572298   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (496813465 bytes)
	I0815 18:27:07.157642   64827 crio.go:462] duration metric: took 1.589448203s to copy over tarball
	I0815 18:27:07.157712   64827 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:27:11.108692   64974 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:27:11.108719   64974 machine.go:96] duration metric: took 6.900145943s to provisionDockerMachine
	I0815 18:27:11.108733   64974 start.go:293] postStartSetup for "kubernetes-upgrade-729203" (driver="kvm2")
	I0815 18:27:11.108745   64974 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:27:11.108767   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:27:11.109114   64974 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:27:11.109146   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:11.112581   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.113050   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:11.113088   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.113392   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:11.113589   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:11.113760   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:11.113891   64974 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa Username:docker}
	I0815 18:27:11.200689   64974 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:27:11.206632   64974 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:27:11.206661   64974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:27:11.206738   64974 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:27:11.206823   64974 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:27:11.206927   64974 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:27:11.221755   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:27:11.256458   64974 start.go:296] duration metric: took 147.711256ms for postStartSetup
	I0815 18:27:11.256534   64974 fix.go:56] duration metric: took 7.075579208s for fixHost
	I0815 18:27:11.256562   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:11.259930   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.260366   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:11.260398   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.260605   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:11.260841   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:11.261116   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:11.261309   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:11.261515   64974 main.go:141] libmachine: Using SSH client type: native
	I0815 18:27:11.261730   64974 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.3 22 <nil> <nil>}
	I0815 18:27:11.261752   64974 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:27:11.381557   64974 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746431.374241246
	
	I0815 18:27:11.381580   64974 fix.go:216] guest clock: 1723746431.374241246
	I0815 18:27:11.381590   64974 fix.go:229] Guest: 2024-08-15 18:27:11.374241246 +0000 UTC Remote: 2024-08-15 18:27:11.256541703 +0000 UTC m=+34.519139191 (delta=117.699543ms)
	I0815 18:27:11.381614   64974 fix.go:200] guest clock delta is within tolerance: 117.699543ms
	I0815 18:27:11.381634   64974 start.go:83] releasing machines lock for "kubernetes-upgrade-729203", held for 7.200695819s
	I0815 18:27:11.381661   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:27:11.381921   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetIP
	I0815 18:27:11.384333   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.384663   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:11.384687   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.384831   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:27:11.385312   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:27:11.385467   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .DriverName
	I0815 18:27:11.385569   64974 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:27:11.385616   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:11.385679   64974 ssh_runner.go:195] Run: cat /version.json
	I0815 18:27:11.385701   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHHostname
	I0815 18:27:11.388192   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.388366   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.388567   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:11.388597   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.388733   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:11.388762   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:11.388764   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:11.388933   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHPort
	I0815 18:27:11.388998   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:11.389113   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHKeyPath
	I0815 18:27:11.389187   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:11.389294   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetSSHUsername
	I0815 18:27:11.389512   64974 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa Username:docker}
	I0815 18:27:11.389512   64974 sshutil.go:53] new ssh client: &{IP:192.168.50.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kubernetes-upgrade-729203/id_rsa Username:docker}
	I0815 18:27:11.486864   64974 ssh_runner.go:195] Run: systemctl --version
	I0815 18:27:11.493320   64974 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:27:11.650059   64974 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:27:11.656186   64974 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:27:11.656249   64974 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:27:11.666203   64974 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 18:27:11.666225   64974 start.go:495] detecting cgroup driver to use...
	I0815 18:27:11.666275   64974 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:27:11.682886   64974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:27:11.697950   64974 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:27:11.697999   64974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:27:11.711679   64974 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:27:11.726486   64974 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:27:10.041249   64827 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.883489348s)
	I0815 18:27:10.041279   64827 crio.go:469] duration metric: took 2.88361349s to extract the tarball
	I0815 18:27:10.041288   64827 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:27:10.083089   64827 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:27:10.116589   64827 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.1". assuming images are not preloaded.
	I0815 18:27:10.116619   64827 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:27:10.116669   64827 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:27:10.116706   64827 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 18:27:10.116745   64827 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0815 18:27:10.116773   64827 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:27:10.116783   64827 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 18:27:10.116935   64827 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 18:27:10.116971   64827 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 18:27:10.116728   64827 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0815 18:27:10.118076   64827 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 18:27:10.118102   64827 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0815 18:27:10.118142   64827 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0815 18:27:10.118168   64827 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:27:10.118216   64827 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 18:27:10.118240   64827 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 18:27:10.118238   64827 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 18:27:10.118292   64827 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:27:10.276141   64827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0815 18:27:10.282018   64827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0815 18:27:10.283009   64827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0815 18:27:10.296049   64827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 18:27:10.309968   64827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0815 18:27:10.323096   64827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:27:10.370788   64827 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0815 18:27:10.370880   64827 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0815 18:27:10.370890   64827 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0815 18:27:10.370915   64827 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0815 18:27:10.370928   64827 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "e9f4b425f9192c11c0fa338cabe04f832aa5cea6dcbba2d1bd2a931224421693" in container runtime
	I0815 18:27:10.370952   64827 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0815 18:27:10.370966   64827 ssh_runner.go:195] Run: which crictl
	I0815 18:27:10.370993   64827 ssh_runner.go:195] Run: which crictl
	I0815 18:27:10.371600   64827 ssh_runner.go:195] Run: which crictl
	I0815 18:27:10.393184   64827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0815 18:27:10.439844   64827 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "b4ea7e648530d171b38f67305e22caf49f9d968d71c558e663709b805076538d" in container runtime
	I0815 18:27:10.439896   64827 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 18:27:10.439904   64827 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0815 18:27:10.439934   64827 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:27:10.439844   64827 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "18688a72645c5d34e1cc70d8deb5bef4fc6c9073bb1b53c7812856afc1de1237" in container runtime
	I0815 18:27:10.439981   64827 ssh_runner.go:195] Run: which crictl
	I0815 18:27:10.439993   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0815 18:27:10.440002   64827 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0815 18:27:10.440024   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0815 18:27:10.440035   64827 ssh_runner.go:195] Run: which crictl
	I0815 18:27:10.439952   64827 ssh_runner.go:195] Run: which crictl
	I0815 18:27:10.440062   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0815 18:27:10.474550   64827 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "beb86f5d8e6cd2234ca24649b74bd10e1e12446764560a3804d85dd6815d0a18" in container runtime
	I0815 18:27:10.474602   64827 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0815 18:27:10.474652   64827 ssh_runner.go:195] Run: which crictl
	I0815 18:27:10.503459   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0815 18:27:10.503471   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 18:27:10.503508   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:27:10.503587   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0815 18:27:10.503587   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0815 18:27:10.503672   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0815 18:27:10.503690   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0815 18:27:10.615478   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:27:10.615550   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0815 18:27:10.615594   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0815 18:27:10.615653   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 18:27:10.620067   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0815 18:27:10.620105   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0815 18:27:10.620158   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0815 18:27:10.737890   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0815 18:27:10.737924   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0815 18:27:10.737984   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.1
	I0815 18:27:10.738057   64827 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0815 18:27:10.738079   64827 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0815 18:27:10.738157   64827 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0815 18:27:10.738191   64827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0815 18:27:10.738178   64827 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1
	I0815 18:27:10.738255   64827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0815 18:27:10.806555   64827 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.1
	I0815 18:27:10.806597   64827 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0815 18:27:10.806638   64827 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0815 18:27:10.806686   64827 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.1
	I0815 18:27:10.806692   64827 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0815 18:27:10.806720   64827 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0815 18:27:10.806733   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (102146048 bytes)
	I0815 18:27:10.806778   64827 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0815 18:27:10.806792   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (311296 bytes)
	I0815 18:27:10.830873   64827 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0815 18:27:10.830913   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (13586432 bytes)
	I0815 18:27:10.860469   64827 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0815 18:27:10.860562   64827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0815 18:27:11.126444   64827 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:27:13.810936   64827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.7: (2.95034772s)
	I0815 18:27:13.810969   64827 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0815 18:27:13.810977   64827 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.684504097s)
	I0815 18:27:13.811002   64827 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0815 18:27:13.811060   64827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0815 18:27:14.158025   64827 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0815 18:27:14.158066   64827 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0815 18:27:14.158149   64827 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
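(The interleaved lines above are the per-image cache flow: inspect for the expected image, remove any mismatched tag with crictl, scp the cached tarball from the host into /var/lib/minikube/images, then podman-load it. Consolidated for a single image, the guest-side sequence amounts to:)

# Per-image cache-load sequence from the log above, consolidated for one image (sketch).
IMG=registry.k8s.io/pause:3.7
TAR=/var/lib/minikube/images/pause_3.7   # scp'd here from .minikube/cache/images/amd64/ on the host
if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
  sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # drop any stale copy first
  sudo podman load -i "$TAR"
fi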
	I0815 18:27:11.869809   64974 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:27:12.008998   64974 docker.go:233] disabling docker service ...
	I0815 18:27:12.009078   64974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:27:12.025896   64974 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:27:12.039459   64974 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:27:12.174267   64974 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:27:12.313317   64974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:27:12.328274   64974 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:27:12.349855   64974 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:27:12.349925   64974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:12.360508   64974 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:27:12.360581   64974 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:12.371009   64974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:12.381709   64974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:12.394157   64974 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:27:12.405556   64974 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:12.416616   64974 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:12.429580   64974 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:27:12.441617   64974 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:27:12.451132   64974 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:27:12.460472   64974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:27:12.600810   64974 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:27:18.766138   64974 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.165287179s)
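Taken together, the sed edits applied to /etc/crio/crio.conf.d/02-crio.conf above leave roughly the following drop-in state before the crio restart. This is a sketch reassembled only from the substitutions logged above; the enclosing [crio.runtime] table header and any other pre-existing keys in that file are assumed, not shown in the log.
	[crio.runtime]                                    # assumed section header, not visible in the log
	pause_image = "registry.k8s.io/pause:3.10"        # set by the pause_image sed
	cgroup_manager = "cgroupfs"                       # set by the cgroup_manager sed
	conmon_cgroup = "pod"                             # re-added after the delete/insert pair
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",        # inserted into the default_sysctls list
	]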
	I0815 18:27:18.766172   64974 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:27:18.766224   64974 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:27:18.771802   64974 start.go:563] Will wait 60s for crictl version
	I0815 18:27:18.771863   64974 ssh_runner.go:195] Run: which crictl
	I0815 18:27:18.777067   64974 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:27:18.828350   64974 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:27:18.828441   64974 ssh_runner.go:195] Run: crio --version
	I0815 18:27:18.861441   64974 ssh_runner.go:195] Run: crio --version
	I0815 18:27:18.903932   64974 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:27:16.308569   64827 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.150394958s)
	I0815 18:27:16.308608   64827 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0815 18:27:16.308647   64827 cache_images.go:92] duration metric: took 6.19201369s to LoadCachedImages
	W0815 18:27:16.308735   64827 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0815 18:27:16.308754   64827 kubeadm.go:934] updating node { 192.168.72.80 8443 v1.24.1 crio true true} ...
	I0815 18:27:16.308872   64827 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=stopped-upgrade-498665 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-498665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:27:16.308967   64827 ssh_runner.go:195] Run: crio config
	I0815 18:27:16.346883   64827 cni.go:84] Creating CNI manager for ""
	I0815 18:27:16.346901   64827 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:27:16.346911   64827 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:27:16.346927   64827 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.80 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-498665 NodeName:stopped-upgrade-498665 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:27:16.347065   64827 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "stopped-upgrade-498665"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:27:16.347144   64827 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0815 18:27:16.355640   64827 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:27:16.355696   64827 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:27:16.363498   64827 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0815 18:27:16.377422   64827 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:27:16.391151   64827 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0815 18:27:16.405468   64827 ssh_runner.go:195] Run: grep 192.168.72.80	control-plane.minikube.internal$ /etc/hosts
	I0815 18:27:16.408825   64827 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.80	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:27:16.418993   64827 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:27:16.529813   64827 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:27:16.544119   64827 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665 for IP: 192.168.72.80
	I0815 18:27:16.544142   64827 certs.go:194] generating shared ca certs ...
	I0815 18:27:16.544163   64827 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:27:16.544323   64827 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:27:16.544401   64827 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:27:16.544424   64827 certs.go:256] generating profile certs ...
	I0815 18:27:16.544559   64827 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/client.key
	I0815 18:27:16.544594   64827 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.key.1f745e80
	I0815 18:27:16.544612   64827 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.crt.1f745e80 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.80]
	I0815 18:27:16.728817   64827 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.crt.1f745e80 ...
	I0815 18:27:16.728844   64827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.crt.1f745e80: {Name:mk326951efa83ae198736a930112bd415bc83b74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:27:16.729031   64827 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.key.1f745e80 ...
	I0815 18:27:16.729049   64827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.key.1f745e80: {Name:mk91388b4f1a1e1d3d8c8f51e3fbaa72a3555b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:27:16.729158   64827 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.crt.1f745e80 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.crt
	I0815 18:27:16.729333   64827 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.key.1f745e80 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.key
	I0815 18:27:16.729527   64827 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/proxy-client.key
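A quick way to confirm that the regenerated apiserver certificate really carries the SANs logged at 18:27:16.544612 (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.72.80) is to dump its extensions once it has been copied onto the node. The path below is the on-VM destination used by the scp a few lines further down; the command is a generic openssl check, not something minikube itself runs here.
	# sketch: list the Subject Alternative Names embedded in the new apiserver cert
	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'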
	I0815 18:27:16.729676   64827 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:27:16.729718   64827 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:27:16.729732   64827 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:27:16.729765   64827 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:27:16.729813   64827 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:27:16.729850   64827 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:27:16.729906   64827 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:27:16.730494   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:27:16.768337   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:27:16.788080   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:27:16.807756   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:27:16.827478   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:27:16.847793   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:27:16.866974   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:27:16.885995   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:27:16.905091   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:27:16.923728   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:27:16.942532   64827 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:27:16.961421   64827 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:27:16.975577   64827 ssh_runner.go:195] Run: openssl version
	I0815 18:27:16.980610   64827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:27:16.989513   64827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:27:16.993458   64827 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:27:16.993500   64827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:27:16.998657   64827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:27:17.007561   64827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:27:17.016437   64827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:27:17.020513   64827 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:27:17.020559   64827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:27:17.025407   64827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:27:17.034241   64827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:27:17.042913   64827 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:27:17.046738   64827 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:27:17.046790   64827 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:27:17.051654   64827 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
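The openssl/ln pairs above follow the standard OpenSSL trust-store convention: each certificate is linked into /etc/ssl/certs under its subject hash plus a ".0" suffix, which is exactly what `openssl x509 -hash` prints. A condensed sketch of one such pair, using the minikubeCA entry and the b5213941 hash seen in this run:
	# sketch: derive the subject hash and create the hash-named link openssl lookups expect
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"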
	I0815 18:27:17.060493   64827 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:27:17.064774   64827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:27:17.070261   64827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:27:17.075591   64827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:27:17.081143   64827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:27:17.086332   64827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:27:17.091480   64827 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
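The `-checkend 86400` runs above are plain expiry probes: openssl exits 0 if the certificate is still valid 86400 seconds (24 hours) from now and 1 otherwise, so the caller only needs the exit status. A minimal, hypothetical wrapper around one of the same files:
	# sketch: the exit status of -checkend decides whether the cert still has 24h of validity
	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "etcd server cert is valid for at least another 24h"
	else
	  echo "etcd server cert expires within 24h"
	fi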
	I0815 18:27:17.096894   64827 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-498665 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-498665 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.80 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0815 18:27:17.096998   64827 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:27:17.097056   64827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:27:17.130098   64827 cri.go:89] found id: ""
	I0815 18:27:17.130161   64827 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0815 18:27:17.139990   64827 kubeadm.go:405] apiserver tunnel failed: apiserver port not set
	I0815 18:27:17.140010   64827 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:27:17.140015   64827 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:27:17.140050   64827 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:27:17.149419   64827 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:27:17.150084   64827 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-498665" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:27:17.150426   64827 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-498665" cluster setting kubeconfig missing "stopped-upgrade-498665" context setting]
	I0815 18:27:17.150937   64827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:27:17.151743   64827 kapi.go:59] client config for stopped-upgrade-498665: &rest.Config{Host:"https://192.168.72.80:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/client.crt", KeyFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/profiles/stopped-upgrade-498665/client.key", CAFile:"/home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f189a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0815 18:27:17.152399   64827 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:27:17.161779   64827 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/crio/crio.sock
	+  criSocket: unix:///var/run/crio/crio.sock
	   name: "stopped-upgrade-498665"
	   kubeletExtraArgs:
	     node-ip: 192.168.72.80
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
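The drift detection above reduces to a unified diff between the kubeadm config already on the node and the freshly rendered kubeadm.yaml.new; any difference (here the criSocket URI scheme and the kubelet cgroup/hairpin settings) selects the reconfigure path. A hypothetical shell rendering of that decision:
	# sketch: diff exits non-zero when the rendered config differs from the one on disk
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "kubeadm config drift detected; reconfiguring from kubeadm.yaml.new"
	fi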
	I0815 18:27:17.161793   64827 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:27:17.161806   64827 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:27:17.161853   64827 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:27:17.193570   64827 cri.go:89] found id: ""
	I0815 18:27:17.193635   64827 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:27:17.207290   64827 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:27:17.215550   64827 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:27:17.216035   64827 kubeadm.go:157] found existing configuration files:
	
	I0815 18:27:17.216087   64827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf
	I0815 18:27:17.223762   64827 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:27:17.223809   64827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:27:17.231660   64827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf
	I0815 18:27:17.239095   64827 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:27:17.239159   64827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:27:17.247272   64827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf
	I0815 18:27:17.254674   64827 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:27:17.254721   64827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:27:17.262618   64827 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf
	I0815 18:27:17.269867   64827 kubeadm.go:163] "https://control-plane.minikube.internal:0" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:0 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:27:17.269913   64827 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:27:17.277682   64827 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
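Each of the grep/rm pairs above applies the same rule: a kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is removed otherwise so the following kubeadm phases can regenerate it (here all four files are missing, so every grep fails and every rm is a no-op). A condensed, hypothetical loop form of those per-file steps:
	# sketch: drop any kubeconfig that does not point at the expected endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q 'https://control-plane.minikube.internal:0' "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done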
	I0815 18:27:17.285570   64827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:27:17.377356   64827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:27:18.368841   64827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:27:18.609243   64827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:27:18.685881   64827 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:27:18.752944   64827 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:27:18.753030   64827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:27:19.253189   64827 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
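The back-to-back pgrep runs at 18:27:18.753 and 18:27:19.253 are a polling loop: after the control-plane manifests are written, the restart path keeps re-checking (at roughly half-second intervals) until a kube-apiserver process whose command line matches the minikube profile appears. An equivalent, hypothetical wait loop with no timeout handling:
	# sketch: poll until the kube-apiserver process for this cluster shows up
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  sleep 0.5
	done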
	I0815 18:27:18.905305   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) Calling .GetIP
	I0815 18:27:18.907965   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:18.908275   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b9:2e:4c", ip: ""} in network mk-kubernetes-upgrade-729203: {Iface:virbr2 ExpiryTime:2024-08-15 19:26:10 +0000 UTC Type:0 Mac:52:54:00:b9:2e:4c Iaid: IPaddr:192.168.50.3 Prefix:24 Hostname:kubernetes-upgrade-729203 Clientid:01:52:54:00:b9:2e:4c}
	I0815 18:27:18.908306   64974 main.go:141] libmachine: (kubernetes-upgrade-729203) DBG | domain kubernetes-upgrade-729203 has defined IP address 192.168.50.3 and MAC address 52:54:00:b9:2e:4c in network mk-kubernetes-upgrade-729203
	I0815 18:27:18.908629   64974 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:27:18.912846   64974 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-729203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:27:18.912942   64974 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:27:18.912998   64974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:27:18.954380   64974 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:27:18.954415   64974 crio.go:433] Images already preloaded, skipping extraction
	I0815 18:27:18.954479   64974 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:27:18.989287   64974 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:27:18.989309   64974 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:27:18.989317   64974 kubeadm.go:934] updating node { 192.168.50.3 8443 v1.31.0 crio true true} ...
	I0815 18:27:18.989439   64974 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-729203 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:27:18.989518   64974 ssh_runner.go:195] Run: crio config
	I0815 18:27:19.042691   64974 cni.go:84] Creating CNI manager for ""
	I0815 18:27:19.042711   64974 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:27:19.042721   64974 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:27:19.042747   64974 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.3 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-729203 NodeName:kubernetes-upgrade-729203 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:27:19.042910   64974 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-729203"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:27:19.042990   64974 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:27:19.053049   64974 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:27:19.053104   64974 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:27:19.062798   64974 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I0815 18:27:19.080086   64974 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:27:19.097110   64974 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0815 18:27:19.115083   64974 ssh_runner.go:195] Run: grep 192.168.50.3	control-plane.minikube.internal$ /etc/hosts
	I0815 18:27:19.119052   64974 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:27:19.255109   64974 ssh_runner.go:195] Run: sudo systemctl start kubelet
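The three scp targets just above lay down the usual kubelet wiring: the kubelet.service unit, the 10-kubeadm.conf drop-in that overrides ExecStart with the flags shown earlier, and the rendered kubeadm.yaml.new, followed by a daemon-reload and a kubelet start. Two generic systemd commands (not run by minikube here) that would confirm the result:
	# sketch: check that systemd picked up the drop-in and that the kubelet is running
	systemctl cat kubelet.service     # prints the unit plus the 10-kubeadm.conf drop-in
	systemctl is-active kubelet       # should report "active" after the start above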
	I0815 18:27:19.271966   64974 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203 for IP: 192.168.50.3
	I0815 18:27:19.271986   64974 certs.go:194] generating shared ca certs ...
	I0815 18:27:19.272002   64974 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:27:19.272148   64974 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:27:19.272200   64974 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:27:19.272209   64974 certs.go:256] generating profile certs ...
	I0815 18:27:19.272302   64974 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/client.key
	I0815 18:27:19.272368   64974 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.key.a6902cfa
	I0815 18:27:19.272443   64974 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.key
	I0815 18:27:19.272593   64974 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:27:19.272626   64974 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:27:19.272635   64974 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:27:19.272657   64974 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:27:19.272679   64974 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:27:19.272701   64974 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:27:19.272746   64974 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:27:19.273365   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:27:19.301927   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:27:19.327485   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:27:19.354175   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:27:19.386951   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0815 18:27:19.416264   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:27:19.442110   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:27:19.477439   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kubernetes-upgrade-729203/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:27:19.510525   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:27:19.538238   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:27:19.568230   64974 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:27:19.593242   64974 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:27:19.612895   64974 ssh_runner.go:195] Run: openssl version
	I0815 18:27:19.618655   64974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:27:19.628958   64974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:27:19.633454   64974 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:27:19.633512   64974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:27:19.640798   64974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:27:19.653956   64974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:27:19.667993   64974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:27:19.674198   64974 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:27:19.674252   64974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:27:19.681603   64974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:27:19.691770   64974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:27:19.702550   64974 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:27:19.707352   64974 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:27:19.707408   64974 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:27:19.715006   64974 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:27:19.725298   64974 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:27:19.729972   64974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:27:19.735633   64974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:27:19.741366   64974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:27:19.747109   64974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:27:19.753239   64974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:27:19.759106   64974 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:27:19.766273   64974 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-729203 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-729203 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.3 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:27:19.766363   64974 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:27:19.766429   64974 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:27:19.816028   64974 cri.go:89] found id: "ec7a92d335315da30ffbeee32bf5b5c46deb0b047a678f6748833363dcb826ff"
	I0815 18:27:19.816056   64974 cri.go:89] found id: "f42cea24e6e54db8c5a485ef9d76d265c7481a8c328422968af214cccd40161d"
	I0815 18:27:19.816062   64974 cri.go:89] found id: "f9bfbc889c118c15ffba1303b3981a89883fbd3804d655a071311a494d2e9f46"
	I0815 18:27:19.816069   64974 cri.go:89] found id: "22ec879a08c436eca3ce8776cc57a71ff1727db697f8482308cd9f325e35ac0a"
	I0815 18:27:19.816090   64974 cri.go:89] found id: "13eeff30a3d9be1db9be77a1166f0dc20b3e003c95c03705e01f1b08705c903a"
	I0815 18:27:19.816097   64974 cri.go:89] found id: "37e0b62c365765e61d05a452d7588a6df2201d12606b3a46c0ae2d3e272f310c"
	I0815 18:27:19.816101   64974 cri.go:89] found id: "4712b566b3c2c3c85c6c952e9b55eeeab21aad7c15ae9150d0ca346a7e5e0b5d"
	I0815 18:27:19.816104   64974 cri.go:89] found id: "dc09804a83f7626d2d6dad836e97bae9e089b9680fbca6fb33967f8bdf46fcb8"
	I0815 18:27:19.816108   64974 cri.go:89] found id: ""
	I0815 18:27:19.816162   64974 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.847571209Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746450847550011,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d60ec66c-f111-4673-b302-3b958cba190d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.848256114Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2ea30880-9fef-42de-b460-7322a62eb9d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.848313524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2ea30880-9fef-42de-b460-7322a62eb9d2 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.856196255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284f46d77a7f8e094206cc490be55c1d5bab4306dd05d0c8b7368938726f2765,PodSandboxId:0bb469e3f4ba8e0416b5f488602a929170bec9da8b647ab95c8ae660114fac8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723746448529905950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7c5q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8540d2e6-913d-4de2-af50-03eb55031f3d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b70c751e3d95686ed1436ab34181c3a4b37175bf60af65ba1f502fdbce3fd8d3,PodSandboxId:dcbc9e01f9917d3511939ea2f8383a0232433a95fe0b313117ecc11cf7cc0ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723746448312081485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jcm9h,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dd1b2a93-a0db-4dfb-af2b-19a00b42c1c8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57393cab224baf9e7d8bb78f5889d5da3d4b2be22e7bea03ca1ab3795bcf088c,PodSandboxId:d3b7e0aa64ad5c21dd42033f16909052a3ecb372cd548b7b95008ecb51fdf829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1723746447703527448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d32d52-6d12-42c5-aff3-30aef5acba7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1de5bbed5d511f2dff53ccd845e9a6b0f2cbcb56ee1b58273463b1f34445e4,PodSandboxId:2d9996348e0620f889209490d78b6d4ef3a52f649a8b2f0f5e86e1731a33dcd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,C
reatedAt:1723746447657069495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxfhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 571065a8-2d7a-4303-a00a-ba3bf8bd4cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a0fc8bcd8bce72118f06a33747c0615ec59061c8f88240828c808175027ea4,PodSandboxId:da00bd1d79bbab345845dd1973baef346daf7265838185dadd8d1ccf36afe7b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723746442813266771,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdd287c739b315708c51907e5b7704f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b135c35f0f8c88d8d6797ae30171b233757aaee21561762ae6a1c22854597a,PodSandboxId:607ea6a455c6a3b5fe72468b1e25eae45bb4721ac6e6cfb5f4ee466ddff1ccff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723746442765636489,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36a3500532eb27461a1af8a3b2fe8bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8607e2511f554594f9a2c5ba3b2026ee9eee4ce5ab7e635c3cfa8d10eb6153a3,PodSandboxId:697e4a0f915c96c11dcfea0151c3934baff649d0d62960edf806f7b1578e9d4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723746442738220159,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198445ae29b67d19beef65cc41bcd878,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29404a5991f500f52e8909ac92aa4ed29ea8bdf087ac506598db24878e16d34,PodSandboxId:d71f20a85ac3270d03450ef941ba927d5b70f11efff2c0a5c5c861d8727c57ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723746442727989444,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ae7921019bee4226962e30879472e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7a92d335315da30ffbeee32bf5b5c46deb0b047a678f6748833363dcb826ff,PodSandboxId:22afee0f9254269fda706ffdec3eba434cfdff5a17ccfc7558d12bbd39661409,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723746400544185526,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxfhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 571065a8-2d7a-4303-a00a-ba3bf8bd4cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42cea24e6e54db8c5a485ef9d76d265c7481a8c328422968af214cccd40161d,PodSandboxId:90353ab3226fe4399e6a4a14e01ddb8982100153f92db8a54260d0193b7be6e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723746400025444792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.k
ubernetes.pod.name: coredns-6f6b679f8f-jcm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd1b2a93-a0db-4dfb-af2b-19a00b42c1c8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bfbc889c118c15ffba1303b3981a89883fbd3804d655a071311a494d2e9f46,PodSandboxId:2b8cd73ef4a77e7e6f0645cb883fcb59e04c941a5728be2fcfd6f9a58840bc04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723746399981361069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7c5q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8540d2e6-913d-4de2-af50-03eb55031f3d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ec879a08c436eca3ce8776cc57a71ff1727db697f8482308cd9f325e35ac0a,PodSandboxId:7b9a0eb03cef8ffb344a8f39e087503863b169ef1b19c99fd2f1c400c44cf53b,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723746399609309640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d32d52-6d12-42c5-aff3-30aef5acba7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13eeff30a3d9be1db9be77a1166f0dc20b3e003c95c03705e01f1b08705c903a,PodSandboxId:9f6fb32b6191e973c8f452a41f1fa19d1267efb1acb5265bc4089fd67b756d0f,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723746389163149565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ae7921019bee4226962e30879472e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37e0b62c365765e61d05a452d7588a6df2201d12606b3a46c0ae2d3e272f310c,PodSandboxId:37d2c85e6b4568ad3912f67ac7d186abc675c71cd0d534bef811518eb702f653,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723746389115359069,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdd287c739b315708c51907e5b7704f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4712b566b3c2c3c85c6c952e9b55eeeab21aad7c15ae9150d0ca346a7e5e0b5d,PodSandboxId:3faf396360c037a6031492593203feb4579b683e2c25d9770703d8a30bc368b7,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746389114212588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198445ae29b67d19beef65cc41bcd878,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc09804a83f7626d2d6dad836e97bae9e089b9680fbca6fb33967f8bdf46fcb8,PodSandboxId:b58e1c5cfafad11ee7e20a7a2416ae5dfe8d79a9a31d294fe78f7e5db424aec3,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723746389082546454,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36a3500532eb27461a1af8a3b2fe8bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2ea30880-9fef-42de-b460-7322a62eb9d2 name=/runtime.v1.RuntimeService/ListContainers
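	The crio debug entries above record incoming CRI RuntimeService calls on kubernetes-upgrade-729203: each polling cycle issues Version, ImageFsInfo, and an unfiltered ListContainers request, and CRI-O replies with the node's full container inventory (the running attempt-1 control-plane containers plus the exited attempt-0 ones left from the earlier run). As a minimal illustrative sketch only, not part of minikube's test harness, the same unfiltered ListContainers call can be reproduced against the CRI-O socket with the k8s.io/cri-api Go client; the socket path /var/run/crio/crio.sock and the output formatting are assumptions made for this example.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket (assumed default path); gRPC understands the unix:// scheme.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()

		client := runtimev1.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter, matching the "No filters were applied, returning full container list"
		// requests logged above.
		resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			// Each entry carries the fields seen in the log payloads: id, metadata name, state.
			fmt.Printf("%-14s %-25s %s\n", c.Id[:12], c.Metadata.Name, c.State)
		}
	}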
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.984519394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=472ac13e-fd4e-4917-9af4-a96901eeaa42 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.984590280Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=472ac13e-fd4e-4917-9af4-a96901eeaa42 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.985664832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9a35473-087c-4571-8baf-06fa9e68aa1b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.986315041Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746450986288096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9a35473-087c-4571-8baf-06fa9e68aa1b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.986986437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=480973c9-c4e2-492a-864c-6aacc7483d02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.987067034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=480973c9-c4e2-492a-864c-6aacc7483d02 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:27:30 kubernetes-upgrade-729203 crio[2273]: time="2024-08-15 18:27:30.987434625Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:284f46d77a7f8e094206cc490be55c1d5bab4306dd05d0c8b7368938726f2765,PodSandboxId:0bb469e3f4ba8e0416b5f488602a929170bec9da8b647ab95c8ae660114fac8b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723746448529905950,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7c5q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8540d2e6-913d-4de2-af50-03eb55031f3d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b70c751e3d95686ed1436ab34181c3a4b37175bf60af65ba1f502fdbce3fd8d3,PodSandboxId:dcbc9e01f9917d3511939ea2f8383a0232433a95fe0b313117ecc11cf7cc0ed2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723746448312081485,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-jcm9h,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: dd1b2a93-a0db-4dfb-af2b-19a00b42c1c8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57393cab224baf9e7d8bb78f5889d5da3d4b2be22e7bea03ca1ab3795bcf088c,PodSandboxId:d3b7e0aa64ad5c21dd42033f16909052a3ecb372cd548b7b95008ecb51fdf829,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1723746447703527448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d32d52-6d12-42c5-aff3-30aef5acba7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a1de5bbed5d511f2dff53ccd845e9a6b0f2cbcb56ee1b58273463b1f34445e4,PodSandboxId:2d9996348e0620f889209490d78b6d4ef3a52f649a8b2f0f5e86e1731a33dcd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,C
reatedAt:1723746447657069495,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxfhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 571065a8-2d7a-4303-a00a-ba3bf8bd4cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5a0fc8bcd8bce72118f06a33747c0615ec59061c8f88240828c808175027ea4,PodSandboxId:da00bd1d79bbab345845dd1973baef346daf7265838185dadd8d1ccf36afe7b6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723746442813266771,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdd287c739b315708c51907e5b7704f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:31b135c35f0f8c88d8d6797ae30171b233757aaee21561762ae6a1c22854597a,PodSandboxId:607ea6a455c6a3b5fe72468b1e25eae45bb4721ac6e6cfb5f4ee466ddff1ccff,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723746442765636489,Labels:map[strin
g]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36a3500532eb27461a1af8a3b2fe8bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8607e2511f554594f9a2c5ba3b2026ee9eee4ce5ab7e635c3cfa8d10eb6153a3,PodSandboxId:697e4a0f915c96c11dcfea0151c3934baff649d0d62960edf806f7b1578e9d4c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723746442738220159,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198445ae29b67d19beef65cc41bcd878,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d29404a5991f500f52e8909ac92aa4ed29ea8bdf087ac506598db24878e16d34,PodSandboxId:d71f20a85ac3270d03450ef941ba927d5b70f11efff2c0a5c5c861d8727c57ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723746442727989444,Labels:map[string]string{io.kubernete
s.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ae7921019bee4226962e30879472e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7a92d335315da30ffbeee32bf5b5c46deb0b047a678f6748833363dcb826ff,PodSandboxId:22afee0f9254269fda706ffdec3eba434cfdff5a17ccfc7558d12bbd39661409,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_EXITED,CreatedAt:1723746400544185526,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dxfhr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 571065a8-2d7a-4303-a00a-ba3bf8bd4cb9,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f42cea24e6e54db8c5a485ef9d76d265c7481a8c328422968af214cccd40161d,PodSandboxId:90353ab3226fe4399e6a4a14e01ddb8982100153f92db8a54260d0193b7be6e3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723746400025444792,Labels:map[string]string{io.kubernetes.container.name: coredns,io.k
ubernetes.pod.name: coredns-6f6b679f8f-jcm9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd1b2a93-a0db-4dfb-af2b-19a00b42c1c8,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9bfbc889c118c15ffba1303b3981a89883fbd3804d655a071311a494d2e9f46,PodSandboxId:2b8cd73ef4a77e7e6f0645cb883fcb59e04c941a5728be2fcfd6f9a58840bc04,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},
ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1723746399981361069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-7c5q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8540d2e6-913d-4de2-af50-03eb55031f3d,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22ec879a08c436eca3ce8776cc57a71ff1727db697f8482308cd9f325e35ac0a,PodSandboxId:7b9a0eb03cef8ffb344a8f39e087503863b169ef1b19c99fd2f1c400c44cf53b,Metadata:&Contain
erMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723746399609309640,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2d32d52-6d12-42c5-aff3-30aef5acba7e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13eeff30a3d9be1db9be77a1166f0dc20b3e003c95c03705e01f1b08705c903a,PodSandboxId:9f6fb32b6191e973c8f452a41f1fa19d1267efb1acb5265bc4089fd67b756d0f,Metadata:&ContainerMetadata{Na
me:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_EXITED,CreatedAt:1723746389163149565,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ae7921019bee4226962e30879472e4,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37e0b62c365765e61d05a452d7588a6df2201d12606b3a46c0ae2d3e272f310c,PodSandboxId:37d2c85e6b4568ad3912f67ac7d186abc675c71cd0d534bef811518eb702f653,Metadata:
&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_EXITED,CreatedAt:1723746389115359069,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdd287c739b315708c51907e5b7704f0,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4712b566b3c2c3c85c6c952e9b55eeeab21aad7c15ae9150d0ca346a7e5e0b5d,PodSandboxId:3faf396360c037a6031492593203feb4579b683e2c25d9770703d8a30bc368b7,Metadata:&Conta
inerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746389114212588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 198445ae29b67d19beef65cc41bcd878,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc09804a83f7626d2d6dad836e97bae9e089b9680fbca6fb33967f8bdf46fcb8,PodSandboxId:b58e1c5cfafad11ee7e20a7a2416ae5dfe8d79a9a31d294fe78f7e5db424aec3,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1723746389082546454,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-729203,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f36a3500532eb27461a1af8a3b2fe8bc,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=480973c9-c4e2-492a-864c-6aacc7483d02 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	284f46d77a7f8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago        Running             coredns                   1                   0bb469e3f4ba8       coredns-6f6b679f8f-7c5q4
	b70c751e3d956       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago        Running             coredns                   1                   dcbc9e01f9917       coredns-6f6b679f8f-jcm9h
	57393cab224ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       1                   d3b7e0aa64ad5       storage-provisioner
	9a1de5bbed5d5       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   3 seconds ago        Running             kube-proxy                1                   2d9996348e062       kube-proxy-dxfhr
	c5a0fc8bcd8bc       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   8 seconds ago        Running             kube-scheduler            1                   da00bd1d79bba       kube-scheduler-kubernetes-upgrade-729203
	31b135c35f0f8       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago        Running             etcd                      1                   607ea6a455c6a       etcd-kubernetes-upgrade-729203
	8607e2511f554       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   8 seconds ago        Running             kube-apiserver            1                   697e4a0f915c9       kube-apiserver-kubernetes-upgrade-729203
	d29404a5991f5       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   8 seconds ago        Running             kube-controller-manager   1                   d71f20a85ac32       kube-controller-manager-kubernetes-upgrade-729203
	ec7a92d335315       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   50 seconds ago       Exited              kube-proxy                0                   22afee0f92542       kube-proxy-dxfhr
	f42cea24e6e54       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   51 seconds ago       Exited              coredns                   0                   90353ab3226fe       coredns-6f6b679f8f-jcm9h
	f9bfbc889c118       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   51 seconds ago       Exited              coredns                   0                   2b8cd73ef4a77       coredns-6f6b679f8f-7c5q4
	22ec879a08c43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   51 seconds ago       Exited              storage-provisioner       0                   7b9a0eb03cef8       storage-provisioner
	13eeff30a3d9b       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Exited              kube-controller-manager   0                   9f6fb32b6191e       kube-controller-manager-kubernetes-upgrade-729203
	37e0b62c36576       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   About a minute ago   Exited              kube-scheduler            0                   37d2c85e6b456       kube-scheduler-kubernetes-upgrade-729203
	4712b566b3c2c       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Exited              kube-apiserver            0                   3faf396360c03       kube-apiserver-kubernetes-upgrade-729203
	dc09804a83f76       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Exited              etcd                      0                   b58e1c5cfafad       etcd-kubernetes-upgrade-729203
	
	
	==> coredns [284f46d77a7f8e094206cc490be55c1d5bab4306dd05d0c8b7368938726f2765] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [b70c751e3d95686ed1436ab34181c3a4b37175bf60af65ba1f502fdbce3fd8d3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f42cea24e6e54db8c5a485ef9d76d265c7481a8c328422968af214cccd40161d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[156483649]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:26:40.206) (total time: 24878ms):
	Trace[156483649]: [24.878857913s] [24.878857913s] END
	[INFO] plugin/kubernetes: Trace[1011362242]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:26:40.205) (total time: 24879ms):
	Trace[1011362242]: [24.879320667s] [24.879320667s] END
	[INFO] plugin/kubernetes: Trace[452872487]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:26:40.205) (total time: 24879ms):
	Trace[452872487]: [24.879995704s] [24.879995704s] END
	
	
	==> coredns [f9bfbc889c118c15ffba1303b3981a89883fbd3804d655a071311a494d2e9f46] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[172156368]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:26:40.206) (total time: 24873ms):
	Trace[172156368]: [24.873830135s] [24.873830135s] END
	[INFO] plugin/kubernetes: Trace[105100092]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:26:40.206) (total time: 24873ms):
	Trace[105100092]: [24.873409454s] [24.873409454s] END
	[INFO] plugin/kubernetes: Trace[1288788504]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:26:40.205) (total time: 24874ms):
	Trace[1288788504]: [24.874945557s] [24.874945557s] END
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-729203
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-729203
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:26:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-729203
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:27:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:27:26 +0000   Thu, 15 Aug 2024 18:26:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:27:26 +0000   Thu, 15 Aug 2024 18:26:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:27:26 +0000   Thu, 15 Aug 2024 18:26:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:27:26 +0000   Thu, 15 Aug 2024 18:26:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.3
	  Hostname:    kubernetes-upgrade-729203
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5609b5d2a58e40b5803913b237b7ef5f
	  System UUID:                5609b5d2-a58e-40b5-8039-13b237b7ef5f
	  Boot ID:                    ca42c143-6d8f-45b2-a0ca-ea62492dac9f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-7c5q4                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     52s
	  kube-system                 coredns-6f6b679f8f-jcm9h                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     52s
	  kube-system                 etcd-kubernetes-upgrade-729203                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         56s
	  kube-system                 kube-apiserver-kubernetes-upgrade-729203             250m (12%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-729203    200m (10%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-dxfhr                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-scheduler-kubernetes-upgrade-729203             100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 50s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node kubernetes-upgrade-729203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node kubernetes-upgrade-729203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x7 over 63s)  kubelet          Node kubernetes-upgrade-729203 status is now: NodeHasSufficientPID
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           53s                node-controller  Node kubernetes-upgrade-729203 event: Registered Node kubernetes-upgrade-729203 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-729203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node kubernetes-upgrade-729203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x7 over 9s)    kubelet          Node kubernetes-upgrade-729203 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2s                 node-controller  Node kubernetes-upgrade-729203 event: Registered Node kubernetes-upgrade-729203 in Controller
	
	
	==> dmesg <==
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.415744] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.079217] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.086734] systemd-fstab-generator[585]: Ignoring "noauto" option for root device
	[  +0.218735] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.150801] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.309217] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +4.692933] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[  +0.080703] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.391660] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[  +7.698410] systemd-fstab-generator[1241]: Ignoring "noauto" option for root device
	[  +0.087859] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.000092] kauditd_printk_skb: 90 callbacks suppressed
	[Aug15 18:27] systemd-fstab-generator[2192]: Ignoring "noauto" option for root device
	[  +0.088122] kauditd_printk_skb: 3 callbacks suppressed
	[  +0.062522] systemd-fstab-generator[2204]: Ignoring "noauto" option for root device
	[  +0.163957] systemd-fstab-generator[2218]: Ignoring "noauto" option for root device
	[  +0.136095] systemd-fstab-generator[2230]: Ignoring "noauto" option for root device
	[  +0.283329] systemd-fstab-generator[2258]: Ignoring "noauto" option for root device
	[  +6.653607] systemd-fstab-generator[2412]: Ignoring "noauto" option for root device
	[  +0.083307] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.601414] systemd-fstab-generator[2534]: Ignoring "noauto" option for root device
	[  +5.612432] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.359206] systemd-fstab-generator[3445]: Ignoring "noauto" option for root device
	
	
	==> etcd [31b135c35f0f8c88d8d6797ae30171b233757aaee21561762ae6a1c22854597a] <==
	{"level":"info","ts":"2024-08-15T18:27:23.148650Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"55439cb767dc52b5","local-member-id":"628f235e18d35f8b","added-peer-id":"628f235e18d35f8b","added-peer-peer-urls":["https://192.168.50.3:2380"]}
	{"level":"info","ts":"2024-08-15T18:27:23.148837Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"55439cb767dc52b5","local-member-id":"628f235e18d35f8b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:27:23.148887Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:27:23.151995Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:27:23.156006Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T18:27:23.156104Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.3:2380"}
	{"level":"info","ts":"2024-08-15T18:27:23.156303Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.3:2380"}
	{"level":"info","ts":"2024-08-15T18:27:23.159764Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"628f235e18d35f8b","initial-advertise-peer-urls":["https://192.168.50.3:2380"],"listen-peer-urls":["https://192.168.50.3:2380"],"advertise-client-urls":["https://192.168.50.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T18:27:23.159881Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T18:27:24.725220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"628f235e18d35f8b is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T18:27:24.725327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"628f235e18d35f8b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T18:27:24.725384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"628f235e18d35f8b received MsgPreVoteResp from 628f235e18d35f8b at term 2"}
	{"level":"info","ts":"2024-08-15T18:27:24.725415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"628f235e18d35f8b became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T18:27:24.725447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"628f235e18d35f8b received MsgVoteResp from 628f235e18d35f8b at term 3"}
	{"level":"info","ts":"2024-08-15T18:27:24.725474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"628f235e18d35f8b became leader at term 3"}
	{"level":"info","ts":"2024-08-15T18:27:24.725499Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 628f235e18d35f8b elected leader 628f235e18d35f8b at term 3"}
	{"level":"info","ts":"2024-08-15T18:27:24.730815Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"628f235e18d35f8b","local-member-attributes":"{Name:kubernetes-upgrade-729203 ClientURLs:[https://192.168.50.3:2379]}","request-path":"/0/members/628f235e18d35f8b/attributes","cluster-id":"55439cb767dc52b5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T18:27:24.730934Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:27:24.731177Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T18:27:24.731217Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T18:27:24.731233Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:27:24.732110Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:27:24.732164Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:27:24.733088Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.3:2379"}
	{"level":"info","ts":"2024-08-15T18:27:24.733296Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [dc09804a83f7626d2d6dad836e97bae9e089b9680fbca6fb33967f8bdf46fcb8] <==
	{"level":"info","ts":"2024-08-15T18:26:30.285048Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:26:30.285100Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T18:26:30.285130Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T18:26:30.285159Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:26:30.286304Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:26:30.287556Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T18:26:30.297561Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:26:30.302949Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.3:2379"}
	{"level":"info","ts":"2024-08-15T18:26:52.251438Z","caller":"traceutil/trace.go:171","msg":"trace[465585305] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"115.093559ms","start":"2024-08-15T18:26:52.136315Z","end":"2024-08-15T18:26:52.251409Z","steps":["trace[465585305] 'process raft request'  (duration: 114.917533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:26:52.460127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.683334ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6884756259464906651 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-3obhpjiqfn7abolfmeyekjapc4\" mod_revision:379 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-3obhpjiqfn7abolfmeyekjapc4\" value_size:616 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-3obhpjiqfn7abolfmeyekjapc4\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T18:26:52.460775Z","caller":"traceutil/trace.go:171","msg":"trace[1811642799] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"225.598041ms","start":"2024-08-15T18:26:52.235162Z","end":"2024-08-15T18:26:52.460761Z","steps":["trace[1811642799] 'process raft request'  (duration: 81.280317ms)","trace[1811642799] 'compare'  (duration: 138.516502ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:26:53.878055Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.031476ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:26:53.878295Z","caller":"traceutil/trace.go:171","msg":"trace[1222537043] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:394; }","duration":"237.28317ms","start":"2024-08-15T18:26:53.640991Z","end":"2024-08-15T18:26:53.878274Z","steps":["trace[1222537043] 'agreement among raft nodes before linearized reading'  (duration: 32.406348ms)","trace[1222537043] 'range keys from in-memory index tree'  (duration: 204.614214ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:26:53.878436Z","caller":"traceutil/trace.go:171","msg":"trace[2027731416] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"204.287916ms","start":"2024-08-15T18:26:53.674139Z","end":"2024-08-15T18:26:53.878427Z","steps":["trace[2027731416] 'process raft request'  (duration: 125.768713ms)","trace[2027731416] 'compare'  (duration: 77.994687ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:27:05.086082Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-15T18:27:05.086164Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"kubernetes-upgrade-729203","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.3:2380"],"advertise-client-urls":["https://192.168.50.3:2379"]}
	{"level":"warn","ts":"2024-08-15T18:27:05.086299Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T18:27:05.086421Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/08/15 18:27:05 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-15T18:27:05.142879Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.3:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-15T18:27:05.142969Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.3:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-15T18:27:05.144337Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"628f235e18d35f8b","current-leader-member-id":"628f235e18d35f8b"}
	{"level":"info","ts":"2024-08-15T18:27:05.146946Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.50.3:2380"}
	{"level":"info","ts":"2024-08-15T18:27:05.147030Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.50.3:2380"}
	{"level":"info","ts":"2024-08-15T18:27:05.147054Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"kubernetes-upgrade-729203","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.3:2380"],"advertise-client-urls":["https://192.168.50.3:2379"]}
	
	
	==> kernel <==
	 18:27:31 up 1 min,  0 users,  load average: 0.81, 0.26, 0.09
	Linux kubernetes-upgrade-729203 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4712b566b3c2c3c85c6c952e9b55eeeab21aad7c15ae9150d0ca346a7e5e0b5d] <==
	I0815 18:27:05.095385       1 remote_available_controller.go:427] Shutting down RemoteAvailability controller
	I0815 18:27:05.095396       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0815 18:27:05.095403       1 controller.go:132] Ending legacy_token_tracking_controller
	I0815 18:27:05.095407       1 controller.go:133] Shutting down legacy_token_tracking_controller
	I0815 18:27:05.095416       1 customresource_discovery_controller.go:328] Shutting down DiscoveryController
	I0815 18:27:05.095428       1 cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller
	I0815 18:27:05.095437       1 crdregistration_controller.go:145] Shutting down crd-autoregister controller
	I0815 18:27:05.095447       1 apf_controller.go:389] Shutting down API Priority and Fairness config worker
	I0815 18:27:05.095465       1 local_available_controller.go:172] Shutting down LocalAvailability controller
	I0815 18:27:05.098315       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 18:27:05.099119       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0815 18:27:05.099546       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0815 18:27:05.099619       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0815 18:27:05.099641       1 dynamic_cafile_content.go:174] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0815 18:27:05.099729       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0815 18:27:05.100557       1 controller.go:157] Shutting down quota evaluator
	I0815 18:27:05.100599       1 controller.go:176] quota evaluator worker shutdown
	I0815 18:27:05.101263       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0815 18:27:05.101310       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0815 18:27:05.103157       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	E0815 18:27:05.103810       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0815 18:27:05.105573       1 controller.go:176] quota evaluator worker shutdown
	I0815 18:27:05.105605       1 controller.go:176] quota evaluator worker shutdown
	I0815 18:27:05.105612       1 controller.go:176] quota evaluator worker shutdown
	I0815 18:27:05.105618       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [8607e2511f554594f9a2c5ba3b2026ee9eee4ce5ab7e635c3cfa8d10eb6153a3] <==
	I0815 18:27:26.133639       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0815 18:27:26.139497       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0815 18:27:26.148741       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 18:27:26.148885       1 shared_informer.go:320] Caches are synced for configmaps
	I0815 18:27:26.148944       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0815 18:27:26.148978       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0815 18:27:26.148984       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0815 18:27:26.149067       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0815 18:27:26.156752       1 aggregator.go:171] initial CRD sync complete...
	I0815 18:27:26.156781       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 18:27:26.156787       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 18:27:26.156792       1 cache.go:39] Caches are synced for autoregister controller
	I0815 18:27:26.157520       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 18:27:26.187885       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0815 18:27:26.198124       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 18:27:26.198208       1 policy_source.go:224] refreshing policies
	I0815 18:27:26.243857       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 18:27:27.038163       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 18:27:27.930287       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 18:27:28.221104       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 18:27:28.353081       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 18:27:28.472629       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 18:27:28.628602       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 18:27:28.643628       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 18:27:29.789203       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [13eeff30a3d9be1db9be77a1166f0dc20b3e003c95c03705e01f1b08705c903a] <==
	I0815 18:26:38.509445       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-729203" podCIDRs=["10.244.0.0/24"]
	I0815 18:26:38.509488       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-729203"
	I0815 18:26:38.509650       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-729203"
	I0815 18:26:38.530180       1 shared_informer.go:320] Caches are synced for taint
	I0815 18:26:38.530187       1 shared_informer.go:320] Caches are synced for HPA
	I0815 18:26:38.530673       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0815 18:26:38.533089       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-729203"
	I0815 18:26:38.533244       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0815 18:26:38.546559       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-729203"
	I0815 18:26:38.580519       1 shared_informer.go:320] Caches are synced for disruption
	I0815 18:26:38.614424       1 shared_informer.go:320] Caches are synced for stateful set
	I0815 18:26:38.690745       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 18:26:38.692000       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 18:26:38.729753       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0815 18:26:38.734855       1 shared_informer.go:320] Caches are synced for endpoint
	I0815 18:26:38.993081       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-729203"
	I0815 18:26:39.121450       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 18:26:39.129857       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 18:26:39.129977       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 18:26:39.443055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="195.933499ms"
	I0815 18:26:39.498413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="55.259075ms"
	I0815 18:26:39.498509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="43.64µs"
	I0815 18:26:40.546462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="90.812µs"
	I0815 18:26:40.573267       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="70.54µs"
	I0815 18:26:42.019250       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-729203"
	
	
	==> kube-controller-manager [d29404a5991f500f52e8909ac92aa4ed29ea8bdf087ac506598db24878e16d34] <==
	I0815 18:27:29.542885       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-729203"
	I0815 18:27:29.548143       1 shared_informer.go:320] Caches are synced for TTL
	I0815 18:27:29.553816       1 shared_informer.go:320] Caches are synced for daemon sets
	I0815 18:27:29.558599       1 shared_informer.go:320] Caches are synced for GC
	I0815 18:27:29.592834       1 shared_informer.go:320] Caches are synced for node
	I0815 18:27:29.592890       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0815 18:27:29.592913       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0815 18:27:29.592918       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0815 18:27:29.592922       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0815 18:27:29.592976       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-729203"
	I0815 18:27:29.638273       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0815 18:27:29.649507       1 shared_informer.go:320] Caches are synced for stateful set
	I0815 18:27:29.656947       1 shared_informer.go:320] Caches are synced for ephemeral
	I0815 18:27:29.671285       1 shared_informer.go:320] Caches are synced for expand
	I0815 18:27:29.688154       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0815 18:27:29.699024       1 shared_informer.go:320] Caches are synced for PVC protection
	I0815 18:27:29.717001       1 shared_informer.go:320] Caches are synced for persistent volume
	I0815 18:27:29.722405       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 18:27:29.724784       1 shared_informer.go:320] Caches are synced for attach detach
	I0815 18:27:29.735462       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 18:27:30.148619       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 18:27:30.148675       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 18:27:30.172034       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 18:27:30.688253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="55.59555ms"
	I0815 18:27:30.689471       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="60.343µs"
	
	
	==> kube-proxy [9a1de5bbed5d511f2dff53ccd845e9a6b0f2cbcb56ee1b58273463b1f34445e4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:27:28.138996       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:27:28.164805       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.3"]
	E0815 18:27:28.165020       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:27:28.294189       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:27:28.294225       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:27:28.294287       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:27:28.305934       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:27:28.306377       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:27:28.306586       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:27:28.310165       1 config.go:197] "Starting service config controller"
	I0815 18:27:28.310242       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:27:28.310272       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:27:28.310380       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:27:28.313581       1 config.go:326] "Starting node config controller"
	I0815 18:27:28.314868       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:27:28.411585       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:27:28.411671       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:27:28.415141       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ec7a92d335315da30ffbeee32bf5b5c46deb0b047a678f6748833363dcb826ff] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:26:40.735069       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:26:40.749336       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.3"]
	E0815 18:26:40.749467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:26:40.788409       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:26:40.788450       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:26:40.788474       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:26:40.791051       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:26:40.791351       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:26:40.791408       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:26:40.793154       1 config.go:197] "Starting service config controller"
	I0815 18:26:40.793205       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:26:40.793237       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:26:40.793253       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:26:40.793766       1 config.go:326] "Starting node config controller"
	I0815 18:26:40.794841       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:26:40.893616       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:26:40.893665       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:26:40.895004       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [37e0b62c365765e61d05a452d7588a6df2201d12606b3a46c0ae2d3e272f310c] <==
	E0815 18:26:32.651730       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:32.663467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 18:26:32.663534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:32.689102       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 18:26:32.689149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:32.699818       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 18:26:32.701887       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:32.733650       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 18:26:32.733742       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:32.737870       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 18:26:32.737923       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:32.806412       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 18:26:32.806592       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:32.957463       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 18:26:32.957608       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:33.047051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 18:26:33.047121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:33.083958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 18:26:33.084061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:33.103790       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 18:26:33.103827       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:26:33.122367       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 18:26:33.123067       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 18:26:35.058450       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0815 18:27:05.077769       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c5a0fc8bcd8bce72118f06a33747c0615ec59061c8f88240828c808175027ea4] <==
	I0815 18:27:23.578962       1 serving.go:386] Generated self-signed cert in-memory
	W0815 18:27:26.089631       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 18:27:26.089841       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 18:27:26.089872       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 18:27:26.089979       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 18:27:26.161503       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 18:27:26.162402       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:27:26.164988       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 18:27:26.165124       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 18:27:26.165173       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 18:27:26.165209       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 18:27:26.265313       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:22.283470    2541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/198445ae29b67d19beef65cc41bcd878-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-729203\" (UID: \"198445ae29b67d19beef65cc41bcd878\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:22.283486    2541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/34ae7921019bee4226962e30879472e4-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-729203\" (UID: \"34ae7921019bee4226962e30879472e4\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:22.283509    2541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34ae7921019bee4226962e30879472e4-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-729203\" (UID: \"34ae7921019bee4226962e30879472e4\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:22.283523    2541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/34ae7921019bee4226962e30879472e4-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-729203\" (UID: \"34ae7921019bee4226962e30879472e4\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:22.283539    2541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdd287c739b315708c51907e5b7704f0-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-729203\" (UID: \"bdd287c739b315708c51907e5b7704f0\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: E0815 18:27:22.285115    2541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-729203?timeout=10s\": dial tcp 192.168.50.3:8443: connect: connection refused" interval="400ms"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:22.446261    2541 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: E0815 18:27:22.447131    2541 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.3:8443: connect: connection refused" node="kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: E0815 18:27:22.687968    2541 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-729203?timeout=10s\": dial tcp 192.168.50.3:8443: connect: connection refused" interval="800ms"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:22.849581    2541 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: E0815 18:27:22.850945    2541 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.3:8443: connect: connection refused" node="kubernetes-upgrade-729203"
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: W0815 18:27:22.918795    2541 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-729203&limit=500&resourceVersion=0": dial tcp 192.168.50.3:8443: connect: connection refused
	Aug 15 18:27:22 kubernetes-upgrade-729203 kubelet[2541]: E0815 18:27:22.918864    2541 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-729203&limit=500&resourceVersion=0\": dial tcp 192.168.50.3:8443: connect: connection refused" logger="UnhandledError"
	Aug 15 18:27:23 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:23.652871    2541 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-729203"
	Aug 15 18:27:26 kubernetes-upgrade-729203 kubelet[2541]: E0815 18:27:26.227261    2541 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-kubernetes-upgrade-729203\" already exists" pod="kube-system/etcd-kubernetes-upgrade-729203"
	Aug 15 18:27:26 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:26.291531    2541 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-729203"
	Aug 15 18:27:26 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:26.291824    2541 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-729203"
	Aug 15 18:27:26 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:26.291936    2541 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 15 18:27:26 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:26.293181    2541 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 15 18:27:27 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:27.054747    2541 apiserver.go:52] "Watching apiserver"
	Aug 15 18:27:27 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:27.081175    2541 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Aug 15 18:27:27 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:27.127349    2541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b2d32d52-6d12-42c5-aff3-30aef5acba7e-tmp\") pod \"storage-provisioner\" (UID: \"b2d32d52-6d12-42c5-aff3-30aef5acba7e\") " pod="kube-system/storage-provisioner"
	Aug 15 18:27:27 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:27.127631    2541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/571065a8-2d7a-4303-a00a-ba3bf8bd4cb9-lib-modules\") pod \"kube-proxy-dxfhr\" (UID: \"571065a8-2d7a-4303-a00a-ba3bf8bd4cb9\") " pod="kube-system/kube-proxy-dxfhr"
	Aug 15 18:27:27 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:27.128281    2541 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/571065a8-2d7a-4303-a00a-ba3bf8bd4cb9-xtables-lock\") pod \"kube-proxy-dxfhr\" (UID: \"571065a8-2d7a-4303-a00a-ba3bf8bd4cb9\") " pod="kube-system/kube-proxy-dxfhr"
	Aug 15 18:27:30 kubernetes-upgrade-729203 kubelet[2541]: I0815 18:27:30.615128    2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [22ec879a08c436eca3ce8776cc57a71ff1727db697f8482308cd9f325e35ac0a] <==
	I0815 18:26:39.700218       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	
	
	==> storage-provisioner [57393cab224baf9e7d8bb78f5889d5da3d4b2be22e7bea03ca1ab3795bcf088c] <==
	I0815 18:27:27.879865       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 18:27:27.921985       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 18:27:27.922041       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 18:27:27.938561       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 18:27:27.939315       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7f2e5349-5d8b-4995-a168-a2bb190fcad9", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-729203_ec86fb50-4a02-4fef-9d01-5f1544b13bab became leader
	I0815 18:27:27.939406       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-729203_ec86fb50-4a02-4fef-9d01-5f1544b13bab!
	I0815 18:27:28.040906       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-729203_ec86fb50-4a02-4fef-9d01-5f1544b13bab!
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:27:30.435460   65495 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19450-13013/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-729203 -n kubernetes-upgrade-729203
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-729203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-729203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-729203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-729203: (1.598819061s)
--- FAIL: TestKubernetesUpgrade (419.05s)
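
Note on the post-mortem stderr above: the message "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is the standard failure mode of Go's bufio.Scanner when a single line exceeds its token buffer (bufio.MaxScanTokenSize, 64 KiB by default); the very long single-line config dumps that appear in these start logs are exactly the kind of input that triggers it. Below is a minimal sketch, not minikube's actual code, of reading such a file with an enlarged scanner buffer; the file name and the 1 MiB cap are illustrative assumptions.

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Hypothetical path standing in for minikube's lastStart.txt.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default limit is bufio.MaxScanTokenSize (64 KiB). Raising it keeps one
	// very long log line from aborting the scan with bufio.ErrTooLong
	// ("bufio.Scanner: token too long"), which is the error seen above.
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)

	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err) // with the default buffer this is where ErrTooLong surfaces
	}
}

With the default buffer the loop stops at the first over-long line and sc.Err() returns ErrTooLong, which matches the "failed to output last start logs" behaviour recorded in the stderr block.
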

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (10.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-728850 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-728850 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 63 (6.100247956s)

                                                
                                                
-- stdout --
	* [pause-728850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 18:23:43.274946   60010 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:23:43.275057   60010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:23:43.275066   60010 out.go:358] Setting ErrFile to fd 2...
	I0815 18:23:43.275070   60010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:23:43.275284   60010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:23:43.275798   60010 out.go:352] Setting JSON to false
	I0815 18:23:43.276769   60010 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7569,"bootTime":1723738654,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:23:43.276826   60010 start.go:139] virtualization: kvm guest
	I0815 18:23:43.279206   60010 out.go:177] * [pause-728850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:23:43.280477   60010 notify.go:220] Checking for updates...
	I0815 18:23:43.280502   60010 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:23:43.281923   60010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:23:43.283279   60010 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:23:43.284735   60010 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:23:43.286045   60010 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:23:43.287637   60010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:23:43.289620   60010 config.go:182] Loaded profile config "pause-728850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:23:43.290207   60010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:23:43.290286   60010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:23:43.305567   60010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37279
	I0815 18:23:43.305962   60010 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:23:43.306466   60010 main.go:141] libmachine: Using API Version  1
	I0815 18:23:43.306485   60010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:23:43.306846   60010 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:23:43.307010   60010 main.go:141] libmachine: (pause-728850) Calling .DriverName
	I0815 18:23:43.307238   60010 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:23:43.307504   60010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:23:43.307534   60010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:23:43.321909   60010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
	I0815 18:23:43.322368   60010 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:23:43.322892   60010 main.go:141] libmachine: Using API Version  1
	I0815 18:23:43.322915   60010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:23:43.323204   60010 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:23:43.323390   60010 main.go:141] libmachine: (pause-728850) Calling .DriverName
	I0815 18:23:49.325817   60010 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:23:49.327539   60010 start.go:297] selected driver: kvm2
	I0815 18:23:49.327556   60010 start.go:901] validating driver "kvm2" against &{Name:pause-728850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-728850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devic
e-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:23:49.327679   60010 start.go:912] status for kvm2: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:/usr/bin/virsh domcapabilities --virttype kvm timed out Reason: Fix:Check that the libvirtd service is running and the socket is ready Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
	I0815 18:23:49.329326   60010 out.go:201] 
	W0815 18:23:49.330627   60010 out.go:270] X Exiting due to PROVIDER_KVM2_NOT_RUNNING: /usr/bin/virsh domcapabilities --virttype kvm timed out
	X Exiting due to PROVIDER_KVM2_NOT_RUNNING: /usr/bin/virsh domcapabilities --virttype kvm timed out
	W0815 18:23:49.330685   60010 out.go:270] * Suggestion: Check that the libvirtd service is running and the socket is ready
	* Suggestion: Check that the libvirtd service is running and the socket is ready
	W0815 18:23:49.330717   60010 out.go:270] * Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	I0815 18:23:49.332050   60010 out.go:201] 

                                                
                                                
** /stderr **
pause_test.go:94: failed to second start a running minikube with args: "out/minikube-linux-amd64 start -p pause-728850 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio" : exit status 63
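The stderr above shows why the second start never reconfigures anything: minikube's driver preflight reports the kvm2 driver as unhealthy because "/usr/bin/virsh domcapabilities --virttype kvm timed out", so the run aborts with PROVIDER_KVM2_NOT_RUNNING (exit status 63) before touching the existing profile. The sketch below is a rough illustration of that kind of health probe, assuming virsh is on PATH and using an arbitrary 15-second timeout; it is modelled on the logged failure, not taken from minikube's implementation.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// checkKVM runs `virsh domcapabilities --virttype kvm` and treats a timeout or
// non-zero exit as "libvirt/KVM not usable". The timeout value is an assumption.
func checkKVM() error {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "virsh", "domcapabilities", "--virttype", "kvm")
	out, err := cmd.CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		return fmt.Errorf("virsh domcapabilities --virttype kvm timed out")
	}
	if err != nil {
		return fmt.Errorf("virsh failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := checkKVM(); err != nil {
		// In the test above this class of failure surfaces as
		// PROVIDER_KVM2_NOT_RUNNING, with the suggestion to check that
		// libvirtd is running and its socket is ready.
		fmt.Println("kvm2 driver unhealthy:", err)
		return
	}
	fmt.Println("kvm2 driver looks healthy")
}

Because the probe fails before any cluster operation, the post-mortem logs that follow reflect the still-running first deployment of pause-728850, not a reconfigured one.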
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-728850 -n pause-728850
helpers_test.go:239: (dbg) Done: out/minikube-linux-amd64 status --format={{.Host}} -p pause-728850 -n pause-728850: (3.080244723s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-728850 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-728850 logs -n 25: (1.071475324s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:19 UTC |                     |
	|         | --schedule 5m                  |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:19 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:19 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:19 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:19 UTC | 15 Aug 24 18:19 UTC |
	|         | --cancel-scheduled             |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:19 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:19 UTC |                     |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| stop    | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:19 UTC | 15 Aug 24 18:20 UTC |
	|         | --schedule 15s                 |                           |         |         |                     |                     |
	| delete  | -p scheduled-stop-028675       | scheduled-stop-028675     | jenkins | v1.33.1 | 15 Aug 24 18:20 UTC | 15 Aug 24 18:20 UTC |
	| start   | -p NoKubernetes-692760         | NoKubernetes-692760       | jenkins | v1.33.1 | 15 Aug 24 18:20 UTC |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p offline-crio-681307         | offline-crio-681307       | jenkins | v1.33.1 | 15 Aug 24 18:20 UTC | 15 Aug 24 18:21 UTC |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --memory=2048             |                           |         |         |                     |                     |
	|         | --wait=true --driver=kvm2      |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-729203   | kubernetes-upgrade-729203 | jenkins | v1.33.1 | 15 Aug 24 18:20 UTC |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-692760         | NoKubernetes-692760       | jenkins | v1.33.1 | 15 Aug 24 18:20 UTC | 15 Aug 24 18:22 UTC |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-708889      | minikube                  | jenkins | v1.26.0 | 15 Aug 24 18:20 UTC | 15 Aug 24 18:22 UTC |
	|         | --memory=2200 --vm-driver=kvm2 |                           |         |         |                     |                     |
	|         |  --container-runtime=crio      |                           |         |         |                     |                     |
	| delete  | -p offline-crio-681307         | offline-crio-681307       | jenkins | v1.33.1 | 15 Aug 24 18:21 UTC | 15 Aug 24 18:21 UTC |
	| start   | -p pause-728850 --memory=2048  | pause-728850              | jenkins | v1.33.1 | 15 Aug 24 18:21 UTC | 15 Aug 24 18:23 UTC |
	|         | --install-addons=false         |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2       |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-692760         | NoKubernetes-692760       | jenkins | v1.33.1 | 15 Aug 24 18:22 UTC | 15 Aug 24 18:22 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-708889      | running-upgrade-708889    | jenkins | v1.33.1 | 15 Aug 24 18:22 UTC | 15 Aug 24 18:23 UTC |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-692760         | NoKubernetes-692760       | jenkins | v1.33.1 | 15 Aug 24 18:22 UTC | 15 Aug 24 18:22 UTC |
	| start   | -p NoKubernetes-692760         | NoKubernetes-692760       | jenkins | v1.33.1 | 15 Aug 24 18:22 UTC | 15 Aug 24 18:23 UTC |
	|         | --no-kubernetes --driver=kvm2  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-692760 sudo    | NoKubernetes-692760       | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-692760         | NoKubernetes-692760       | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC | 15 Aug 24 18:23 UTC |
	| start   | -p NoKubernetes-692760         | NoKubernetes-692760       | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | --driver=kvm2                  |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-708889      | running-upgrade-708889    | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	| start   | -p pause-728850                | pause-728850              | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2             |                           |         |         |                     |                     |
	|         | --container-runtime=crio       |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:23:43
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:23:43.274946   60010 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:23:43.275057   60010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:23:43.275066   60010 out.go:358] Setting ErrFile to fd 2...
	I0815 18:23:43.275070   60010 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:23:43.275284   60010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:23:43.275798   60010 out.go:352] Setting JSON to false
	I0815 18:23:43.276769   60010 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7569,"bootTime":1723738654,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:23:43.276826   60010 start.go:139] virtualization: kvm guest
	I0815 18:23:43.279206   60010 out.go:177] * [pause-728850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:23:43.280477   60010 notify.go:220] Checking for updates...
	I0815 18:23:43.280502   60010 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:23:43.281923   60010 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:23:43.283279   60010 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:23:43.284735   60010 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:23:43.286045   60010 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:23:43.287637   60010 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:23:43.289620   60010 config.go:182] Loaded profile config "pause-728850": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:23:43.290207   60010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:23:43.290286   60010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:23:43.305567   60010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37279
	I0815 18:23:43.305962   60010 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:23:43.306466   60010 main.go:141] libmachine: Using API Version  1
	I0815 18:23:43.306485   60010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:23:43.306846   60010 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:23:43.307010   60010 main.go:141] libmachine: (pause-728850) Calling .DriverName
	I0815 18:23:43.307238   60010 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:23:43.307504   60010 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:23:43.307534   60010 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:23:43.321909   60010 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42083
	I0815 18:23:43.322368   60010 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:23:43.322892   60010 main.go:141] libmachine: Using API Version  1
	I0815 18:23:43.322915   60010 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:23:43.323204   60010 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:23:43.323390   60010 main.go:141] libmachine: (pause-728850) Calling .DriverName
	I0815 18:23:49.325817   60010 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:23:49.327539   60010 start.go:297] selected driver: kvm2
	I0815 18:23:49.327556   60010 start.go:901] validating driver "kvm2" against &{Name:pause-728850 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.0 ClusterName:pause-728850 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.4 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devic
e-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:23:49.327679   60010 start.go:912] status for kvm2: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:/usr/bin/virsh domcapabilities --virttype kvm timed out Reason: Fix:Check that the libvirtd service is running and the socket is ready Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/ Version:}
	I0815 18:23:49.329326   60010 out.go:201] 
	W0815 18:23:49.330627   60010 out.go:270] X Exiting due to PROVIDER_KVM2_NOT_RUNNING: /usr/bin/virsh domcapabilities --virttype kvm timed out
	W0815 18:23:49.330685   60010 out.go:270] * Suggestion: Check that the libvirtd service is running and the socket is ready
	W0815 18:23:49.330717   60010 out.go:270] * Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	I0815 18:23:49.332050   60010 out.go:201] 
	
	
	==> CRI-O <==
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.793672112Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08817bfa-79d2-47e3-bb76-48ac50ad050b name=/runtime.v1.RuntimeService/Version
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.797278781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=446627a7-1cb8-4dd2-9b0a-db629407f435 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.797945287Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746232797916897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=446627a7-1cb8-4dd2-9b0a-db629407f435 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.798822984Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1c1e328-11eb-4109-9a5b-3fc6e5945fd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.798895057Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1c1e328-11eb-4109-9a5b-3fc6e5945fd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.799038213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b37b3836115cd1b1059f582c842e9ef109124d229b5bb1d1e12acd7a466c035,PodSandboxId:e84b0fbacafd63e050f150bf133b1281cb652421811847a1cf159a7e57c0f100,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723746180492877934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hv42g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27546ead-98d1-4bc2-a85f-7ca0b28e8766,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6322ee37f74d4a951c402e42fbbbccdb2b577238f65ca4a15ce704bb00decb6,PodSandboxId:a1a12c9f3f2f9596e5617065a40d1aa3ec4a4e5797f584051d41130bddd1aedd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723746180022589426,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 08e84521-f0ab-4ba8-84ee-a0fb14e127cc,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38304578e0c6be2ff7f48b4f0bf819bec93f54d4e3c32991c30278f2844d58af,PodSandboxId:4f14cda7e99c821fe0b10d93650dc9ae154c736a1a21d45967798ac4fb0bd4a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723746165445708769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffd6553258b
a8681c1b642192202163,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb173355d7b6c9ab63e16aed3830981fddaf4af446337e96adf74e6dd4f249e,PodSandboxId:a3a49742d640938de40fe46bc8a9eb8d928667d5c46dd31ff64644be66b59c24,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723746165384451303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880bb4c02e2642c112cd398a891c7ac5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7340f9776605f577e228cf560f9a56b5af878b863621f65f0945838e97f1d3,PodSandboxId:4806d785b90a02bdf3fcaf7b85ea783e4f84eadadee416e1b79ffd7a07f9d742,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723746165421147178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff45c67b48194866b11b4af5592a1027,},Annotations:map[string]string{io.kubernetes
.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:664bae0dce755595eeba937dd25281ebc4c62deb7e4be92c7e335a964b41e9b9,PodSandboxId:462a311cd71b76a01d880be93a74639f46c5c53f10566d8a8f22ca2103f0c1fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723746165390910446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cfabc89ce77c2ab98c9f9faf29d46e4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1c1e328-11eb-4109-9a5b-3fc6e5945fd4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.838312556Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=540e2234-08c3-44ac-8243-709f94ef35d5 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.838410745Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=540e2234-08c3-44ac-8243-709f94ef35d5 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.839468132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a1b0602d-43b1-4714-b2c1-a202b6c49c0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.839915698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746232839891154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1b0602d-43b1-4714-b2c1-a202b6c49c0b name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.840745467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9430ab7-492e-490d-8e66-d96ffb49e879 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.840816894Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b9430ab7-492e-490d-8e66-d96ffb49e879 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.840952498Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b37b3836115cd1b1059f582c842e9ef109124d229b5bb1d1e12acd7a466c035,PodSandboxId:e84b0fbacafd63e050f150bf133b1281cb652421811847a1cf159a7e57c0f100,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723746180492877934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hv42g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27546ead-98d1-4bc2-a85f-7ca0b28e8766,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6322ee37f74d4a951c402e42fbbbccdb2b577238f65ca4a15ce704bb00decb6,PodSandboxId:a1a12c9f3f2f9596e5617065a40d1aa3ec4a4e5797f584051d41130bddd1aedd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723746180022589426,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 08e84521-f0ab-4ba8-84ee-a0fb14e127cc,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38304578e0c6be2ff7f48b4f0bf819bec93f54d4e3c32991c30278f2844d58af,PodSandboxId:4f14cda7e99c821fe0b10d93650dc9ae154c736a1a21d45967798ac4fb0bd4a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723746165445708769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffd6553258b
a8681c1b642192202163,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb173355d7b6c9ab63e16aed3830981fddaf4af446337e96adf74e6dd4f249e,PodSandboxId:a3a49742d640938de40fe46bc8a9eb8d928667d5c46dd31ff64644be66b59c24,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723746165384451303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880bb4c02e2642c112cd398a891c7ac5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7340f9776605f577e228cf560f9a56b5af878b863621f65f0945838e97f1d3,PodSandboxId:4806d785b90a02bdf3fcaf7b85ea783e4f84eadadee416e1b79ffd7a07f9d742,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723746165421147178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff45c67b48194866b11b4af5592a1027,},Annotations:map[string]string{io.kubernetes
.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:664bae0dce755595eeba937dd25281ebc4c62deb7e4be92c7e335a964b41e9b9,PodSandboxId:462a311cd71b76a01d880be93a74639f46c5c53f10566d8a8f22ca2103f0c1fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723746165390910446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cfabc89ce77c2ab98c9f9faf29d46e4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b9430ab7-492e-490d-8e66-d96ffb49e879 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.854556356Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=d7f7bf18-1f2e-44a6-998e-8d1debcc2010 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.854783556Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e19acbd5246961c261cb59f1e86303c2166ec8c443af7db73ec26a6f54b25c0c,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-bcl4c,Uid:17e5004e-fe5e-4ba0-aed4-f988c9cef31e,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1723746179912861510,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-bcl4c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 17e5004e-fe5e-4ba0-aed4-f988c9cef31e,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T18:22:59.590652208Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e84b0fbacafd63e050f150bf133b1281cb652421811847a1cf159a7e57c0f100,Metadata:&PodSandboxMetadata{Name:coredns-6f6b679f8f-hv42g,Uid:27546ead-98d1-4bc2-a85f-7ca0b28e8766,Namespace:kube-syst
em,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723746179872714415,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6f6b679f8f-hv42g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27546ead-98d1-4bc2-a85f-7ca0b28e8766,k8s-app: kube-dns,pod-template-hash: 6f6b679f8f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-08-15T18:22:59.549900935Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a1a12c9f3f2f9596e5617065a40d1aa3ec4a4e5797f584051d41130bddd1aedd,Metadata:&PodSandboxMetadata{Name:kube-proxy-rn6f2,Uid:08e84521-f0ab-4ba8-84ee-a0fb14e127cc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723746179699470216,Labels:map[string]string{controller-revision-hash: 5976bc5f75,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rn6f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 08e84521-f0ab-4ba8-84ee-a0fb14e127cc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[
string]string{kubernetes.io/config.seen: 2024-08-15T18:22:59.389619688Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4f14cda7e99c821fe0b10d93650dc9ae154c736a1a21d45967798ac4fb0bd4a0,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-728850,Uid:8ffd6553258ba8681c1b642192202163,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723746165145518723,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffd6553258ba8681c1b642192202163,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8ffd6553258ba8681c1b642192202163,kubernetes.io/config.seen: 2024-08-15T18:22:44.631560276Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4806d785b90a02bdf3fcaf7b85ea783e4f84eadadee416e1b79ffd7a07f9d742,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-728850,Uid:ff45c67b48194866b11b4af5592a1027
,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723746165144617724,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff45c67b48194866b11b4af5592a1027,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.4:8443,kubernetes.io/config.hash: ff45c67b48194866b11b4af5592a1027,kubernetes.io/config.seen: 2024-08-15T18:22:44.631556851Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:462a311cd71b76a01d880be93a74639f46c5c53f10566d8a8f22ca2103f0c1fc,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-728850,Uid:0cfabc89ce77c2ab98c9f9faf29d46e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723746165142132707,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-con
troller-manager-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cfabc89ce77c2ab98c9f9faf29d46e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0cfabc89ce77c2ab98c9f9faf29d46e4,kubernetes.io/config.seen: 2024-08-15T18:22:44.631558752Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a3a49742d640938de40fe46bc8a9eb8d928667d5c46dd31ff64644be66b59c24,Metadata:&PodSandboxMetadata{Name:etcd-pause-728850,Uid:880bb4c02e2642c112cd398a891c7ac5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1723746165111967151,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880bb4c02e2642c112cd398a891c7ac5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.4:2379,kubernetes.io/config.hash: 880bb4c02e2642c112cd398a891c7ac5,kubernetes.io/confi
g.seen: 2024-08-15T18:22:44.631551793Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d7f7bf18-1f2e-44a6-998e-8d1debcc2010 name=/runtime.v1.RuntimeService/ListPodSandbox
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.855799687Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1e132d1d-e70b-4f3a-b053-5428e49104d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.855883166Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1e132d1d-e70b-4f3a-b053-5428e49104d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.856037289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b37b3836115cd1b1059f582c842e9ef109124d229b5bb1d1e12acd7a466c035,PodSandboxId:e84b0fbacafd63e050f150bf133b1281cb652421811847a1cf159a7e57c0f100,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723746180492877934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hv42g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27546ead-98d1-4bc2-a85f-7ca0b28e8766,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6322ee37f74d4a951c402e42fbbbccdb2b577238f65ca4a15ce704bb00decb6,PodSandboxId:a1a12c9f3f2f9596e5617065a40d1aa3ec4a4e5797f584051d41130bddd1aedd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723746180022589426,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 08e84521-f0ab-4ba8-84ee-a0fb14e127cc,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38304578e0c6be2ff7f48b4f0bf819bec93f54d4e3c32991c30278f2844d58af,PodSandboxId:4f14cda7e99c821fe0b10d93650dc9ae154c736a1a21d45967798ac4fb0bd4a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723746165445708769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffd6553258b
a8681c1b642192202163,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb173355d7b6c9ab63e16aed3830981fddaf4af446337e96adf74e6dd4f249e,PodSandboxId:a3a49742d640938de40fe46bc8a9eb8d928667d5c46dd31ff64644be66b59c24,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723746165384451303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880bb4c02e2642c112cd398a891c7ac5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7340f9776605f577e228cf560f9a56b5af878b863621f65f0945838e97f1d3,PodSandboxId:4806d785b90a02bdf3fcaf7b85ea783e4f84eadadee416e1b79ffd7a07f9d742,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723746165421147178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff45c67b48194866b11b4af5592a1027,},Annotations:map[string]string{io.kubernetes
.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:664bae0dce755595eeba937dd25281ebc4c62deb7e4be92c7e335a964b41e9b9,PodSandboxId:462a311cd71b76a01d880be93a74639f46c5c53f10566d8a8f22ca2103f0c1fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723746165390910446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cfabc89ce77c2ab98c9f9faf29d46e4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1e132d1d-e70b-4f3a-b053-5428e49104d6 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.893531803Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d1f4aed-0ff3-45e7-954b-6c2bd48f7700 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.893665127Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d1f4aed-0ff3-45e7-954b-6c2bd48f7700 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.895014940Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=728aa130-ce70-4cdd-b343-557d183ce9f7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.895579978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746232895554761,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=728aa130-ce70-4cdd-b343-557d183ce9f7 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.896246540Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e48c656e-c570-46b2-bc7c-4bc421ad49f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.896315531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e48c656e-c570-46b2-bc7c-4bc421ad49f5 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:23:52 pause-728850 crio[682]: time="2024-08-15 18:23:52.896451916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5b37b3836115cd1b1059f582c842e9ef109124d229b5bb1d1e12acd7a466c035,PodSandboxId:e84b0fbacafd63e050f150bf133b1281cb652421811847a1cf159a7e57c0f100,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723746180492877934,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-hv42g,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27546ead-98d1-4bc2-a85f-7ca0b28e8766,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6322ee37f74d4a951c402e42fbbbccdb2b577238f65ca4a15ce704bb00decb6,PodSandboxId:a1a12c9f3f2f9596e5617065a40d1aa3ec4a4e5797f584051d41130bddd1aedd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723746180022589426,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rn6f2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 08e84521-f0ab-4ba8-84ee-a0fb14e127cc,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38304578e0c6be2ff7f48b4f0bf819bec93f54d4e3c32991c30278f2844d58af,PodSandboxId:4f14cda7e99c821fe0b10d93650dc9ae154c736a1a21d45967798ac4fb0bd4a0,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723746165445708769,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ffd6553258b
a8681c1b642192202163,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aeb173355d7b6c9ab63e16aed3830981fddaf4af446337e96adf74e6dd4f249e,PodSandboxId:a3a49742d640938de40fe46bc8a9eb8d928667d5c46dd31ff64644be66b59c24,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723746165384451303,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 880bb4c02e2642c112cd398a891c7ac5,},Annotations:map[string]st
ring{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec7340f9776605f577e228cf560f9a56b5af878b863621f65f0945838e97f1d3,PodSandboxId:4806d785b90a02bdf3fcaf7b85ea783e4f84eadadee416e1b79ffd7a07f9d742,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723746165421147178,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ff45c67b48194866b11b4af5592a1027,},Annotations:map[string]string{io.kubernetes
.container.hash: f72d0944,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:664bae0dce755595eeba937dd25281ebc4c62deb7e4be92c7e335a964b41e9b9,PodSandboxId:462a311cd71b76a01d880be93a74639f46c5c53f10566d8a8f22ca2103f0c1fc,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723746165390910446,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-728850,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cfabc89ce77c2ab98c9f9faf29d46e4,},Annotations:map[string]string{io.k
ubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e48c656e-c570-46b2-bc7c-4bc421ad49f5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	5b37b3836115c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   52 seconds ago       Running             coredns                   0                   e84b0fbacafd6       coredns-6f6b679f8f-hv42g
	e6322ee37f74d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   52 seconds ago       Running             kube-proxy                0                   a1a12c9f3f2f9       kube-proxy-rn6f2
	38304578e0c6b       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   About a minute ago   Running             kube-scheduler            0                   4f14cda7e99c8       kube-scheduler-pause-728850
	ec7340f977660       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   About a minute ago   Running             kube-apiserver            0                   4806d785b90a0       kube-apiserver-pause-728850
	664bae0dce755       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   About a minute ago   Running             kube-controller-manager   0                   462a311cd71b7       kube-controller-manager-pause-728850
	aeb173355d7b6       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   About a minute ago   Running             etcd                      0                   a3a49742d6409       etcd-pause-728850
	
	
	==> coredns [5b37b3836115cd1b1059f582c842e9ef109124d229b5bb1d1e12acd7a466c035] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49954 - 62337 "HINFO IN 6695064833075609724.4592791033127252978. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017347043s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1010460098]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:23:00.848) (total time: 30002ms):
	Trace[1010460098]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:23:30.850)
	Trace[1010460098]: [30.002572853s] [30.002572853s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[520279124]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:23:00.850) (total time: 30000ms):
	Trace[520279124]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:23:30.851)
	Trace[520279124]: [30.000959971s] [30.000959971s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1573684280]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (15-Aug-2024 18:23:00.848) (total time: 30002ms):
	Trace[1573684280]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30002ms (18:23:30.851)
	Trace[1573684280]: [30.002597571s] [30.002597571s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-728850
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-728850
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=pause-728850
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T18_22_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:22:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-728850
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:23:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:23:05 +0000   Thu, 15 Aug 2024 18:22:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:23:05 +0000   Thu, 15 Aug 2024 18:22:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:23:05 +0000   Thu, 15 Aug 2024 18:22:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:23:05 +0000   Thu, 15 Aug 2024 18:22:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.4
	  Hostname:    pause-728850
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 baa8bfb8eca442e5bbb8e05d2b65b1a4
	  System UUID:                baa8bfb8-eca4-42e5-bbb8-e05d2b65b1a4
	  Boot ID:                    776e781a-a3c0-48e9-8d3c-1222dd1ca95f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-hv42g                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     54s
	  kube-system                 etcd-pause-728850                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         59s
	  kube-system                 kube-apiserver-pause-728850             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-pause-728850    200m (10%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-proxy-rn6f2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-pause-728850             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 69s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node pause-728850 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node pause-728850 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s (x7 over 69s)  kubelet          Node pause-728850 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  59s                kubelet          Node pause-728850 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s                kubelet          Node pause-728850 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s                kubelet          Node pause-728850 status is now: NodeHasSufficientPID
	  Normal  NodeReady                58s                kubelet          Node pause-728850 status is now: NodeReady
	  Normal  RegisteredNode           55s                node-controller  Node pause-728850 event: Registered Node pause-728850 in Controller
	
	
	==> dmesg <==
	[Aug15 18:22] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049972] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040411] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.234895] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.723169] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +5.119065] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.977559] systemd-fstab-generator[599]: Ignoring "noauto" option for root device
	[  +0.063498] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068635] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.217438] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.123851] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +0.313559] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.268806] systemd-fstab-generator[765]: Ignoring "noauto" option for root device
	[  +0.059820] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.888570] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +0.995315] kauditd_printk_skb: 46 callbacks suppressed
	[  +9.584876] systemd-fstab-generator[1237]: Ignoring "noauto" option for root device
	[  +0.099082] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.329280] systemd-fstab-generator[1392]: Ignoring "noauto" option for root device
	[  +0.136092] kauditd_printk_skb: 21 callbacks suppressed
	[Aug15 18:23] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [aeb173355d7b6c9ab63e16aed3830981fddaf4af446337e96adf74e6dd4f249e] <==
	{"level":"info","ts":"2024-08-15T18:22:50.925986Z","caller":"traceutil/trace.go:171","msg":"trace[1823270941] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"1.137496779s","start":"2024-08-15T18:22:49.788470Z","end":"2024-08-15T18:22:50.925967Z","steps":["trace[1823270941] 'process raft request'  (duration: 677.624118ms)","trace[1823270941] 'compare'  (duration: 459.374836ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:22:50.927029Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:22:49.788455Z","time spent":"1.138529589s","remote":"127.0.0.1:39132","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5335,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-728850\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-728850\" value_size:5273 >> failure:<>"}
	{"level":"warn","ts":"2024-08-15T18:22:50.926160Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.13679507s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-08-15T18:22:50.927423Z","caller":"traceutil/trace.go:171","msg":"trace[1536408234] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:0; response_revision:74; }","duration":"1.138051774s","start":"2024-08-15T18:22:49.789348Z","end":"2024-08-15T18:22:50.927400Z","steps":["trace[1536408234] 'agreement among raft nodes before linearized reading'  (duration: 1.136668278s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:22:50.927464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:22:49.789326Z","time spent":"1.138122779s","remote":"127.0.0.1:39310","response type":"/etcdserverpb.KV/Range","request count":0,"request size":49,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:aggregate-to-view\" "}
	{"level":"info","ts":"2024-08-15T18:22:51.258833Z","caller":"traceutil/trace.go:171","msg":"trace[318645739] transaction","detail":"{read_only:false; response_revision:75; number_of_response:1; }","duration":"321.676626ms","start":"2024-08-15T18:22:50.937139Z","end":"2024-08-15T18:22:51.258816Z","steps":["trace[318645739] 'process raft request'  (duration: 246.350976ms)","trace[318645739] 'compare'  (duration: 75.062335ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:22:51.259676Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:22:50.937124Z","time spent":"322.494863ms","remote":"127.0.0.1:39310","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/clusterroles/cluster-admin\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/cluster-admin\" value_size:496 >> failure:<>"}
	{"level":"warn","ts":"2024-08-15T18:22:51.843805Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"451.032554ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11075918662629354218 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/pause-728850.17ebfa01017dbf12\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-728850.17ebfa01017dbf12\" value_size:544 lease:1852546625774578408 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2024-08-15T18:22:51.844021Z","caller":"traceutil/trace.go:171","msg":"trace[1001687043] linearizableReadLoop","detail":"{readStateIndex:81; appliedIndex:80; }","duration":"580.163977ms","start":"2024-08-15T18:22:51.263841Z","end":"2024-08-15T18:22:51.844005Z","steps":["trace[1001687043] 'read index received'  (duration: 128.777276ms)","trace[1001687043] 'applied index is now lower than readState.Index'  (duration: 451.385219ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:22:51.844141Z","caller":"traceutil/trace.go:171","msg":"trace[1958115088] transaction","detail":"{read_only:false; response_revision:76; number_of_response:1; }","duration":"583.071319ms","start":"2024-08-15T18:22:51.261050Z","end":"2024-08-15T18:22:51.844121Z","steps":["trace[1958115088] 'process raft request'  (duration: 131.664987ms)","trace[1958115088] 'compare'  (duration: 450.400156ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:22:51.844309Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"580.329761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:discovery\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-08-15T18:22:51.844351Z","caller":"traceutil/trace.go:171","msg":"trace[2000560358] range","detail":"{range_begin:/registry/clusterroles/system:discovery; range_end:; response_count:0; response_revision:76; }","duration":"580.508174ms","start":"2024-08-15T18:22:51.263836Z","end":"2024-08-15T18:22:51.844345Z","steps":["trace[2000560358] 'agreement among raft nodes before linearized reading'  (duration: 580.248927ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:22:51.844411Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:22:51.263788Z","time spent":"580.614731ms","remote":"127.0.0.1:39310","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":27,"request content":"key:\"/registry/clusterroles/system:discovery\" "}
	{"level":"warn","ts":"2024-08-15T18:22:51.844323Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:22:51.261029Z","time spent":"583.240021ms","remote":"127.0.0.1:39008","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":616,"response count":0,"response size":37,"request content":"compare:<target:MOD key:\"/registry/events/default/pause-728850.17ebfa01017dbf12\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/pause-728850.17ebfa01017dbf12\" value_size:544 lease:1852546625774578408 >> failure:<>"}
	{"level":"info","ts":"2024-08-15T18:22:51.998532Z","caller":"traceutil/trace.go:171","msg":"trace[1345750722] linearizableReadLoop","detail":"{readStateIndex:83; appliedIndex:81; }","duration":"147.26393ms","start":"2024-08-15T18:22:51.851254Z","end":"2024-08-15T18:22:51.998518Z","steps":["trace[1345750722] 'read index received'  (duration: 119.393451ms)","trace[1345750722] 'applied index is now lower than readState.Index'  (duration: 27.869932ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:22:51.998568Z","caller":"traceutil/trace.go:171","msg":"trace[1979554916] transaction","detail":"{read_only:false; response_revision:77; number_of_response:1; }","duration":"150.907744ms","start":"2024-08-15T18:22:51.847637Z","end":"2024-08-15T18:22:51.998545Z","steps":["trace[1979554916] 'process raft request'  (duration: 123.062207ms)","trace[1979554916] 'compare'  (duration: 27.562489ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:22:51.998751Z","caller":"traceutil/trace.go:171","msg":"trace[1900253208] transaction","detail":"{read_only:false; response_revision:78; number_of_response:1; }","duration":"151.100225ms","start":"2024-08-15T18:22:51.847641Z","end":"2024-08-15T18:22:51.998741Z","steps":["trace[1900253208] 'process raft request'  (duration: 150.819969ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:22:51.999061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.875395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-08-15T18:22:51.999153Z","caller":"traceutil/trace.go:171","msg":"trace[1235522] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:78; }","duration":"147.978168ms","start":"2024-08-15T18:22:51.851164Z","end":"2024-08-15T18:22:51.999142Z","steps":["trace[1235522] 'agreement among raft nodes before linearized reading'  (duration: 147.796793ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:22:52.210448Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.214273ms","expected-duration":"100ms","prefix":"","request":"header:<ID:11075918662629354230 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:basic-user\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:basic-user\" value_size:617 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2024-08-15T18:22:52.210517Z","caller":"traceutil/trace.go:171","msg":"trace[1787239345] linearizableReadLoop","detail":"{readStateIndex:88; appliedIndex:87; }","duration":"117.012516ms","start":"2024-08-15T18:22:52.093493Z","end":"2024-08-15T18:22:52.210505Z","steps":["trace[1787239345] 'read index received'  (duration: 8.634105ms)","trace[1787239345] 'applied index is now lower than readState.Index'  (duration: 108.377585ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:22:52.210608Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.115828ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/pause-728850.17ebfa0103d23290\" ","response":"range_response_count:1 size:682"}
	{"level":"info","ts":"2024-08-15T18:22:52.210630Z","caller":"traceutil/trace.go:171","msg":"trace[1492943679] range","detail":"{range_begin:/registry/events/default/pause-728850.17ebfa0103d23290; range_end:; response_count:1; response_revision:83; }","duration":"117.140064ms","start":"2024-08-15T18:22:52.093480Z","end":"2024-08-15T18:22:52.210620Z","steps":["trace[1492943679] 'agreement among raft nodes before linearized reading'  (duration: 117.057587ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:22:52.210937Z","caller":"traceutil/trace.go:171","msg":"trace[1500242731] transaction","detail":"{read_only:false; response_revision:83; number_of_response:1; }","duration":"138.017068ms","start":"2024-08-15T18:22:52.072850Z","end":"2024-08-15T18:22:52.210868Z","steps":["trace[1500242731] 'process raft request'  (duration: 29.376605ms)","trace[1500242731] 'compare'  (duration: 107.560522ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:22:53.280602Z","caller":"traceutil/trace.go:171","msg":"trace[2111900642] transaction","detail":"{read_only:false; response_revision:223; number_of_response:1; }","duration":"177.274354ms","start":"2024-08-15T18:22:53.103309Z","end":"2024-08-15T18:22:53.280584Z","steps":["trace[2111900642] 'process raft request'  (duration: 109.189762ms)","trace[2111900642] 'compare'  (duration: 67.843247ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:23:53 up 1 min,  0 users,  load average: 0.41, 0.19, 0.07
	Linux pause-728850 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ec7340f9776605f577e228cf560f9a56b5af878b863621f65f0945838e97f1d3] <==
	I0815 18:22:48.750363       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0815 18:22:48.751422       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0815 18:22:48.751471       1 policy_source.go:224] refreshing policies
	I0815 18:22:48.757324       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0815 18:22:48.757479       1 aggregator.go:171] initial CRD sync complete...
	I0815 18:22:48.757526       1 autoregister_controller.go:144] Starting autoregister controller
	I0815 18:22:48.757535       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0815 18:22:48.757542       1 cache.go:39] Caches are synced for autoregister controller
	I0815 18:22:48.758412       1 controller.go:615] quota admission added evaluator for: namespaces
	I0815 18:22:48.934806       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 18:22:49.624872       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0815 18:22:49.782374       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0815 18:22:49.782411       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0815 18:22:53.083259       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 18:22:53.325423       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0815 18:22:53.458436       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0815 18:22:53.466474       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.72.4]
	I0815 18:22:53.467562       1 controller.go:615] quota admission added evaluator for: endpoints
	I0815 18:22:53.473152       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 18:22:53.663817       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0815 18:22:54.569460       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0815 18:22:54.588105       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0815 18:22:54.603379       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0815 18:22:59.364178       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0815 18:22:59.419844       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [664bae0dce755595eeba937dd25281ebc4c62deb7e4be92c7e335a964b41e9b9] <==
	I0815 18:22:58.511656       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0815 18:22:58.524577       1 shared_informer.go:320] Caches are synced for disruption
	I0815 18:22:58.536691       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 18:22:58.620497       1 shared_informer.go:320] Caches are synced for resource quota
	I0815 18:22:58.665459       1 shared_informer.go:320] Caches are synced for persistent volume
	I0815 18:22:59.098525       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 18:22:59.111587       1 shared_informer.go:320] Caches are synced for garbage collector
	I0815 18:22:59.111683       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0815 18:22:59.326571       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-728850"
	I0815 18:22:59.581966       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="153.635813ms"
	I0815 18:22:59.625621       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="43.579844ms"
	I0815 18:22:59.625708       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="57.269µs"
	I0815 18:22:59.625799       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="59.353µs"
	I0815 18:22:59.638840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="160.494µs"
	I0815 18:23:00.510662       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="63.717381ms"
	I0815 18:23:00.525698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="14.954039ms"
	I0815 18:23:00.525802       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="59.634µs"
	I0815 18:23:00.696008       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="49.669µs"
	I0815 18:23:01.711534       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="54.752µs"
	I0815 18:23:05.049160       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="pause-728850"
	I0815 18:23:11.084109       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="49.571µs"
	I0815 18:23:11.771052       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="73.302µs"
	I0815 18:23:11.778372       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="91.256µs"
	I0815 18:23:41.569104       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="21.645771ms"
	I0815 18:23:41.570765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="67.469µs"
	
	
	==> kube-proxy [e6322ee37f74d4a951c402e42fbbbccdb2b577238f65ca4a15ce704bb00decb6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:23:00.633771       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:23:00.671854       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.4"]
	E0815 18:23:00.671964       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:23:00.862303       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:23:00.862394       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:23:00.862437       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:23:00.865100       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:23:00.865490       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:23:00.865540       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:23:00.866917       1 config.go:197] "Starting service config controller"
	I0815 18:23:00.866988       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:23:00.867032       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:23:00.867049       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:23:00.869642       1 config.go:326] "Starting node config controller"
	I0815 18:23:00.869733       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:23:00.968138       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:23:00.968322       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:23:00.969940       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [38304578e0c6be2ff7f48b4f0bf819bec93f54d4e3c32991c30278f2844d58af] <==
	W0815 18:22:51.526704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 18:22:51.527014       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:51.558974       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 18:22:51.559046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:51.733668       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 18:22:51.733870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:52.073369       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 18:22:52.073523       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:52.234054       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 18:22:52.234124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:52.396768       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 18:22:52.398651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:52.655366       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 18:22:52.655436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:52.659789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 18:22:52.660147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:52.696339       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 18:22:52.696411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:52.713246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 18:22:52.713310       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:52.786145       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 18:22:52.786322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:22:53.189070       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 18:22:53.189143       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 18:22:57.381410       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:23:00 pause-728850 kubelet[1244]: I0815 18:23:00.782420    1244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rn6f2" podStartSLOduration=1.782399928 podStartE2EDuration="1.782399928s" podCreationTimestamp="2024-08-15 18:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-15 18:23:00.758113845 +0000 UTC m=+6.357265635" watchObservedRunningTime="2024-08-15 18:23:00.782399928 +0000 UTC m=+6.381551716"
	Aug 15 18:23:01 pause-728850 kubelet[1244]: I0815 18:23:01.726243    1244 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-bcl4c" podStartSLOduration=2.72617753 podStartE2EDuration="2.72617753s" podCreationTimestamp="2024-08-15 18:22:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-15 18:23:01.710984023 +0000 UTC m=+7.310135816" watchObservedRunningTime="2024-08-15 18:23:01.72617753 +0000 UTC m=+7.325329321"
	Aug 15 18:23:04 pause-728850 kubelet[1244]: E0815 18:23:04.628634    1244 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746184628291961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:04 pause-728850 kubelet[1244]: E0815 18:23:04.629013    1244 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746184628291961,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:05 pause-728850 kubelet[1244]: I0815 18:23:05.033819    1244 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 15 18:23:05 pause-728850 kubelet[1244]: I0815 18:23:05.035017    1244 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.109036    1244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg5r2\" (UniqueName: \"kubernetes.io/projected/17e5004e-fe5e-4ba0-aed4-f988c9cef31e-kube-api-access-xg5r2\") pod \"17e5004e-fe5e-4ba0-aed4-f988c9cef31e\" (UID: \"17e5004e-fe5e-4ba0-aed4-f988c9cef31e\") "
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.109123    1244 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17e5004e-fe5e-4ba0-aed4-f988c9cef31e-config-volume\") pod \"17e5004e-fe5e-4ba0-aed4-f988c9cef31e\" (UID: \"17e5004e-fe5e-4ba0-aed4-f988c9cef31e\") "
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.109727    1244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17e5004e-fe5e-4ba0-aed4-f988c9cef31e-config-volume" (OuterVolumeSpecName: "config-volume") pod "17e5004e-fe5e-4ba0-aed4-f988c9cef31e" (UID: "17e5004e-fe5e-4ba0-aed4-f988c9cef31e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.115743    1244 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e5004e-fe5e-4ba0-aed4-f988c9cef31e-kube-api-access-xg5r2" (OuterVolumeSpecName: "kube-api-access-xg5r2") pod "17e5004e-fe5e-4ba0-aed4-f988c9cef31e" (UID: "17e5004e-fe5e-4ba0-aed4-f988c9cef31e"). InnerVolumeSpecName "kube-api-access-xg5r2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.210043    1244 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xg5r2\" (UniqueName: \"kubernetes.io/projected/17e5004e-fe5e-4ba0-aed4-f988c9cef31e-kube-api-access-xg5r2\") on node \"pause-728850\" DevicePath \"\""
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.210098    1244 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17e5004e-fe5e-4ba0-aed4-f988c9cef31e-config-volume\") on node \"pause-728850\" DevicePath \"\""
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.720420    1244 scope.go:117] "RemoveContainer" containerID="2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d"
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.760091    1244 scope.go:117] "RemoveContainer" containerID="2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d"
	Aug 15 18:23:11 pause-728850 kubelet[1244]: E0815 18:23:11.761074    1244 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d\": container with ID starting with 2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d not found: ID does not exist" containerID="2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d"
	Aug 15 18:23:11 pause-728850 kubelet[1244]: I0815 18:23:11.761332    1244 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d"} err="failed to get container status \"2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d\": rpc error: code = NotFound desc = could not find container \"2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d\": container with ID starting with 2fd6bd3130836903d69cac68a6320b349ec5191cf05822a41daff49d9ea2449d not found: ID does not exist"
	Aug 15 18:23:12 pause-728850 kubelet[1244]: I0815 18:23:12.615477    1244 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e5004e-fe5e-4ba0-aed4-f988c9cef31e" path="/var/lib/kubelet/pods/17e5004e-fe5e-4ba0-aed4-f988c9cef31e/volumes"
	Aug 15 18:23:14 pause-728850 kubelet[1244]: E0815 18:23:14.630679    1244 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746194630430357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:14 pause-728850 kubelet[1244]: E0815 18:23:14.630761    1244 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746194630430357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:24 pause-728850 kubelet[1244]: E0815 18:23:24.634092    1244 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746204631985857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:24 pause-728850 kubelet[1244]: E0815 18:23:24.635330    1244 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746204631985857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:34 pause-728850 kubelet[1244]: E0815 18:23:34.636833    1244 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746214636501053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:34 pause-728850 kubelet[1244]: E0815 18:23:34.636855    1244 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746214636501053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:44 pause-728850 kubelet[1244]: E0815 18:23:44.638095    1244 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746224637879614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:23:44 pause-728850 kubelet[1244]: E0815 18:23:44.638139    1244 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723746224637879614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125204,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-728850 -n pause-728850
helpers_test.go:261: (dbg) Run:  kubectl --context pause-728850 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (10.61s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (297.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-278865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0815 18:26:15.296171   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-278865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m56.983648251s)

                                                
                                                
-- stdout --
	* [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 18:25:53.788893   64368 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:25:53.788979   64368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:25:53.788984   64368 out.go:358] Setting ErrFile to fd 2...
	I0815 18:25:53.788989   64368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:25:53.789159   64368 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:25:53.789706   64368 out.go:352] Setting JSON to false
	I0815 18:25:53.790584   64368 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7700,"bootTime":1723738654,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:25:53.790636   64368 start.go:139] virtualization: kvm guest
	I0815 18:25:53.792750   64368 out.go:177] * [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:25:53.794152   64368 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:25:53.794189   64368 notify.go:220] Checking for updates...
	I0815 18:25:53.796838   64368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:25:53.798248   64368 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:25:53.799586   64368 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:25:53.800816   64368 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:25:53.802080   64368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:25:53.803849   64368 config.go:182] Loaded profile config "cert-expiration-003860": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:25:53.803936   64368 config.go:182] Loaded profile config "kubernetes-upgrade-729203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:25:53.804016   64368 config.go:182] Loaded profile config "stopped-upgrade-498665": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0815 18:25:53.804083   64368 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:25:53.839002   64368 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 18:25:53.840361   64368 start.go:297] selected driver: kvm2
	I0815 18:25:53.840381   64368 start.go:901] validating driver "kvm2" against <nil>
	I0815 18:25:53.840392   64368 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:25:53.841058   64368 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:25:53.841172   64368 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:25:53.855474   64368 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:25:53.855512   64368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 18:25:53.855705   64368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:25:53.855758   64368 cni.go:84] Creating CNI manager for ""
	I0815 18:25:53.855770   64368 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:25:53.855781   64368 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 18:25:53.855825   64368 start.go:340] cluster config:
	{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:25:53.855912   64368 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:25:53.857536   64368 out.go:177] * Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	I0815 18:25:53.858613   64368 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:25:53.858644   64368 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:25:53.858654   64368 cache.go:56] Caching tarball of preloaded images
	I0815 18:25:53.858717   64368 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:25:53.858726   64368 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:25:53.858801   64368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:25:53.858816   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json: {Name:mkf6d362ba564d7b775ae42e63037d279f71b39d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:25:53.858933   64368 start.go:360] acquireMachinesLock for old-k8s-version-278865: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:26:19.737924   64368 start.go:364] duration metric: took 25.878924414s to acquireMachinesLock for "old-k8s-version-278865"
	I0815 18:26:19.738027   64368 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:26:19.738173   64368 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 18:26:19.740156   64368 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0815 18:26:19.740389   64368 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:26:19.740440   64368 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:26:19.757943   64368 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45725
	I0815 18:26:19.758472   64368 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:26:19.759094   64368 main.go:141] libmachine: Using API Version  1
	I0815 18:26:19.759120   64368 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:26:19.759415   64368 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:26:19.759589   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:26:19.759716   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:19.759845   64368 start.go:159] libmachine.API.Create for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:26:19.759884   64368 client.go:168] LocalClient.Create starting
	I0815 18:26:19.759929   64368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 18:26:19.759966   64368 main.go:141] libmachine: Decoding PEM data...
	I0815 18:26:19.759987   64368 main.go:141] libmachine: Parsing certificate...
	I0815 18:26:19.760051   64368 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 18:26:19.760079   64368 main.go:141] libmachine: Decoding PEM data...
	I0815 18:26:19.760095   64368 main.go:141] libmachine: Parsing certificate...
	I0815 18:26:19.760130   64368 main.go:141] libmachine: Running pre-create checks...
	I0815 18:26:19.760143   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .PreCreateCheck
	I0815 18:26:19.760524   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:26:19.760951   64368 main.go:141] libmachine: Creating machine...
	I0815 18:26:19.760975   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .Create
	I0815 18:26:19.761190   64368 main.go:141] libmachine: (old-k8s-version-278865) Creating KVM machine...
	I0815 18:26:19.762390   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found existing default KVM network
	I0815 18:26:19.764362   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:19.764177   64609 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014d20}
	I0815 18:26:19.764390   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | created network xml: 
	I0815 18:26:19.764432   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | <network>
	I0815 18:26:19.764467   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |   <name>mk-old-k8s-version-278865</name>
	I0815 18:26:19.764480   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |   <dns enable='no'/>
	I0815 18:26:19.764504   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |   
	I0815 18:26:19.764547   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 18:26:19.764569   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |     <dhcp>
	I0815 18:26:19.764582   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 18:26:19.764599   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |     </dhcp>
	I0815 18:26:19.764613   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |   </ip>
	I0815 18:26:19.764620   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG |   
	I0815 18:26:19.764629   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | </network>
	I0815 18:26:19.764639   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | 
	I0815 18:26:19.770133   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | trying to create private KVM network mk-old-k8s-version-278865 192.168.39.0/24...
	I0815 18:26:19.840465   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | private KVM network mk-old-k8s-version-278865 192.168.39.0/24 created
	I0815 18:26:19.840509   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:19.840444   64609 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:26:19.840523   64368 main.go:141] libmachine: (old-k8s-version-278865) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865 ...
	I0815 18:26:19.840541   64368 main.go:141] libmachine: (old-k8s-version-278865) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 18:26:19.840614   64368 main.go:141] libmachine: (old-k8s-version-278865) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 18:26:20.105974   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:20.105846   64609 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa...
	I0815 18:26:20.292417   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:20.292267   64609 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/old-k8s-version-278865.rawdisk...
	I0815 18:26:20.292451   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Writing magic tar header
	I0815 18:26:20.292470   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Writing SSH key tar header
	I0815 18:26:20.292484   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:20.292386   64609 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865 ...
	I0815 18:26:20.292587   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865
	I0815 18:26:20.292647   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 18:26:20.292665   64368 main.go:141] libmachine: (old-k8s-version-278865) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865 (perms=drwx------)
	I0815 18:26:20.292676   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:26:20.292690   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 18:26:20.292705   64368 main.go:141] libmachine: (old-k8s-version-278865) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 18:26:20.292715   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 18:26:20.292728   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Checking permissions on dir: /home/jenkins
	I0815 18:26:20.292739   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Checking permissions on dir: /home
	I0815 18:26:20.292767   64368 main.go:141] libmachine: (old-k8s-version-278865) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 18:26:20.292795   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Skipping /home - not owner
	I0815 18:26:20.292807   64368 main.go:141] libmachine: (old-k8s-version-278865) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 18:26:20.292822   64368 main.go:141] libmachine: (old-k8s-version-278865) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 18:26:20.292835   64368 main.go:141] libmachine: (old-k8s-version-278865) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 18:26:20.292850   64368 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:26:20.294096   64368 main.go:141] libmachine: (old-k8s-version-278865) define libvirt domain using xml: 
	I0815 18:26:20.294114   64368 main.go:141] libmachine: (old-k8s-version-278865) <domain type='kvm'>
	I0815 18:26:20.294124   64368 main.go:141] libmachine: (old-k8s-version-278865)   <name>old-k8s-version-278865</name>
	I0815 18:26:20.294132   64368 main.go:141] libmachine: (old-k8s-version-278865)   <memory unit='MiB'>2200</memory>
	I0815 18:26:20.294143   64368 main.go:141] libmachine: (old-k8s-version-278865)   <vcpu>2</vcpu>
	I0815 18:26:20.294154   64368 main.go:141] libmachine: (old-k8s-version-278865)   <features>
	I0815 18:26:20.294173   64368 main.go:141] libmachine: (old-k8s-version-278865)     <acpi/>
	I0815 18:26:20.294185   64368 main.go:141] libmachine: (old-k8s-version-278865)     <apic/>
	I0815 18:26:20.294194   64368 main.go:141] libmachine: (old-k8s-version-278865)     <pae/>
	I0815 18:26:20.294215   64368 main.go:141] libmachine: (old-k8s-version-278865)     
	I0815 18:26:20.294228   64368 main.go:141] libmachine: (old-k8s-version-278865)   </features>
	I0815 18:26:20.294239   64368 main.go:141] libmachine: (old-k8s-version-278865)   <cpu mode='host-passthrough'>
	I0815 18:26:20.294248   64368 main.go:141] libmachine: (old-k8s-version-278865)   
	I0815 18:26:20.294257   64368 main.go:141] libmachine: (old-k8s-version-278865)   </cpu>
	I0815 18:26:20.294287   64368 main.go:141] libmachine: (old-k8s-version-278865)   <os>
	I0815 18:26:20.294301   64368 main.go:141] libmachine: (old-k8s-version-278865)     <type>hvm</type>
	I0815 18:26:20.294314   64368 main.go:141] libmachine: (old-k8s-version-278865)     <boot dev='cdrom'/>
	I0815 18:26:20.294325   64368 main.go:141] libmachine: (old-k8s-version-278865)     <boot dev='hd'/>
	I0815 18:26:20.294337   64368 main.go:141] libmachine: (old-k8s-version-278865)     <bootmenu enable='no'/>
	I0815 18:26:20.294346   64368 main.go:141] libmachine: (old-k8s-version-278865)   </os>
	I0815 18:26:20.294355   64368 main.go:141] libmachine: (old-k8s-version-278865)   <devices>
	I0815 18:26:20.294373   64368 main.go:141] libmachine: (old-k8s-version-278865)     <disk type='file' device='cdrom'>
	I0815 18:26:20.294388   64368 main.go:141] libmachine: (old-k8s-version-278865)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/boot2docker.iso'/>
	I0815 18:26:20.294398   64368 main.go:141] libmachine: (old-k8s-version-278865)       <target dev='hdc' bus='scsi'/>
	I0815 18:26:20.294407   64368 main.go:141] libmachine: (old-k8s-version-278865)       <readonly/>
	I0815 18:26:20.294416   64368 main.go:141] libmachine: (old-k8s-version-278865)     </disk>
	I0815 18:26:20.294425   64368 main.go:141] libmachine: (old-k8s-version-278865)     <disk type='file' device='disk'>
	I0815 18:26:20.294438   64368 main.go:141] libmachine: (old-k8s-version-278865)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 18:26:20.294456   64368 main.go:141] libmachine: (old-k8s-version-278865)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/old-k8s-version-278865.rawdisk'/>
	I0815 18:26:20.294466   64368 main.go:141] libmachine: (old-k8s-version-278865)       <target dev='hda' bus='virtio'/>
	I0815 18:26:20.294479   64368 main.go:141] libmachine: (old-k8s-version-278865)     </disk>
	I0815 18:26:20.294494   64368 main.go:141] libmachine: (old-k8s-version-278865)     <interface type='network'>
	I0815 18:26:20.294506   64368 main.go:141] libmachine: (old-k8s-version-278865)       <source network='mk-old-k8s-version-278865'/>
	I0815 18:26:20.294517   64368 main.go:141] libmachine: (old-k8s-version-278865)       <model type='virtio'/>
	I0815 18:26:20.294549   64368 main.go:141] libmachine: (old-k8s-version-278865)     </interface>
	I0815 18:26:20.294565   64368 main.go:141] libmachine: (old-k8s-version-278865)     <interface type='network'>
	I0815 18:26:20.294579   64368 main.go:141] libmachine: (old-k8s-version-278865)       <source network='default'/>
	I0815 18:26:20.294589   64368 main.go:141] libmachine: (old-k8s-version-278865)       <model type='virtio'/>
	I0815 18:26:20.294600   64368 main.go:141] libmachine: (old-k8s-version-278865)     </interface>
	I0815 18:26:20.294611   64368 main.go:141] libmachine: (old-k8s-version-278865)     <serial type='pty'>
	I0815 18:26:20.294620   64368 main.go:141] libmachine: (old-k8s-version-278865)       <target port='0'/>
	I0815 18:26:20.294630   64368 main.go:141] libmachine: (old-k8s-version-278865)     </serial>
	I0815 18:26:20.294638   64368 main.go:141] libmachine: (old-k8s-version-278865)     <console type='pty'>
	I0815 18:26:20.294649   64368 main.go:141] libmachine: (old-k8s-version-278865)       <target type='serial' port='0'/>
	I0815 18:26:20.294660   64368 main.go:141] libmachine: (old-k8s-version-278865)     </console>
	I0815 18:26:20.294670   64368 main.go:141] libmachine: (old-k8s-version-278865)     <rng model='virtio'>
	I0815 18:26:20.294680   64368 main.go:141] libmachine: (old-k8s-version-278865)       <backend model='random'>/dev/random</backend>
	I0815 18:26:20.294690   64368 main.go:141] libmachine: (old-k8s-version-278865)     </rng>
	I0815 18:26:20.294698   64368 main.go:141] libmachine: (old-k8s-version-278865)     
	I0815 18:26:20.294708   64368 main.go:141] libmachine: (old-k8s-version-278865)     
	I0815 18:26:20.294716   64368 main.go:141] libmachine: (old-k8s-version-278865)   </devices>
	I0815 18:26:20.294725   64368 main.go:141] libmachine: (old-k8s-version-278865) </domain>
	I0815 18:26:20.294736   64368 main.go:141] libmachine: (old-k8s-version-278865) 
	I0815 18:26:20.303255   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:43:52:45 in network default
	I0815 18:26:20.304153   64368 main.go:141] libmachine: (old-k8s-version-278865) Ensuring networks are active...
	I0815 18:26:20.304183   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:20.305178   64368 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network default is active
	I0815 18:26:20.305666   64368 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network mk-old-k8s-version-278865 is active
	I0815 18:26:20.306338   64368 main.go:141] libmachine: (old-k8s-version-278865) Getting domain xml...
	I0815 18:26:20.307218   64368 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:26:21.786017   64368 main.go:141] libmachine: (old-k8s-version-278865) Waiting to get IP...
	I0815 18:26:21.787235   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:21.787739   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:21.787766   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:21.787719   64609 retry.go:31] will retry after 215.934825ms: waiting for machine to come up
	I0815 18:26:22.005367   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:22.006058   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:22.006083   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:22.005960   64609 retry.go:31] will retry after 352.288336ms: waiting for machine to come up
	I0815 18:26:22.359585   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:22.360116   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:22.360151   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:22.360039   64609 retry.go:31] will retry after 339.814419ms: waiting for machine to come up
	I0815 18:26:22.701729   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:22.702318   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:22.702340   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:22.702223   64609 retry.go:31] will retry after 480.397412ms: waiting for machine to come up
	I0815 18:26:23.183922   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:23.184448   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:23.184476   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:23.184406   64609 retry.go:31] will retry after 591.567415ms: waiting for machine to come up
	I0815 18:26:23.777218   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:23.777753   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:23.777799   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:23.777713   64609 retry.go:31] will retry after 855.993413ms: waiting for machine to come up
	I0815 18:26:24.635176   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:24.635663   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:24.635706   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:24.635619   64609 retry.go:31] will retry after 989.25383ms: waiting for machine to come up
	I0815 18:26:25.626811   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:25.627281   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:25.627304   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:25.627243   64609 retry.go:31] will retry after 1.243107723s: waiting for machine to come up
	I0815 18:26:26.872518   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:26.872998   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:26.873024   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:26.872953   64609 retry.go:31] will retry after 1.446663155s: waiting for machine to come up
	I0815 18:26:28.321430   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:28.321974   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:28.322003   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:28.321918   64609 retry.go:31] will retry after 1.709476148s: waiting for machine to come up
	I0815 18:26:30.032912   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:30.033435   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:30.033463   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:30.033380   64609 retry.go:31] will retry after 2.467536976s: waiting for machine to come up
	I0815 18:26:32.502319   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:32.502954   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:32.503016   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:32.502925   64609 retry.go:31] will retry after 2.261157367s: waiting for machine to come up
	I0815 18:26:34.765746   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:34.766202   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:34.766222   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:34.766170   64609 retry.go:31] will retry after 4.21053318s: waiting for machine to come up
	I0815 18:26:38.977877   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:38.978302   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:26:38.978327   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:26:38.978262   64609 retry.go:31] will retry after 3.959621261s: waiting for machine to come up
	I0815 18:26:42.941583   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:42.942026   64368 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:26:42.942043   64368 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:26:42.942057   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:42.942578   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865
	I0815 18:26:43.020225   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:26:43.020252   64368 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:26:43.020265   64368 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:26:43.023222   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.023781   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.023806   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.024005   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:26:43.024046   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:26:43.024095   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:26:43.024110   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:26:43.024130   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:26:43.148744   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:26:43.148999   64368 main.go:141] libmachine: (old-k8s-version-278865) KVM machine creation complete!
	I0815 18:26:43.149424   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:26:43.149994   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:43.150217   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:43.150431   64368 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 18:26:43.150455   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:26:43.151866   64368 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 18:26:43.151888   64368 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 18:26:43.151895   64368 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 18:26:43.151904   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.154259   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.154571   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.154603   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.154773   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:43.154949   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.155125   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.155271   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:43.155445   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:43.155728   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:43.155747   64368 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 18:26:43.260226   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:26:43.260249   64368 main.go:141] libmachine: Detecting the provisioner...
	I0815 18:26:43.260257   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.263094   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.263460   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.263489   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.263653   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:43.263823   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.264103   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.264297   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:43.264463   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:43.264665   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:43.264680   64368 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 18:26:43.369481   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 18:26:43.369606   64368 main.go:141] libmachine: found compatible host: buildroot
	I0815 18:26:43.369619   64368 main.go:141] libmachine: Provisioning with buildroot...
	I0815 18:26:43.369631   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:26:43.369858   64368 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:26:43.369885   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:26:43.370058   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.372749   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.373121   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.373148   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.373271   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:43.373431   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.373535   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.373632   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:43.373819   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:43.373984   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:43.373996   64368 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:26:43.495620   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:26:43.495643   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.498687   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.499012   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.499051   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.499196   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:43.499359   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.499523   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:43.499658   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:43.499819   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:43.500043   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:43.500070   64368 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:26:43.613817   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:26:43.613850   64368 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:26:43.613884   64368 buildroot.go:174] setting up certificates
	I0815 18:26:43.613894   64368 provision.go:84] configureAuth start
	I0815 18:26:43.613912   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:26:43.614152   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:26:43.616903   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.617304   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.617338   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.617471   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:43.619662   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.619962   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:43.619999   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:43.620141   64368 provision.go:143] copyHostCerts
	I0815 18:26:43.620197   64368 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:26:43.620215   64368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:26:43.620273   64368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:26:43.620445   64368 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:26:43.620455   64368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:26:43.620479   64368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:26:43.620582   64368 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:26:43.620590   64368 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:26:43.620609   64368 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:26:43.620668   64368 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
	I0815 18:26:44.187517   64368 provision.go:177] copyRemoteCerts
	I0815 18:26:44.187575   64368 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:26:44.187598   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.189998   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.190293   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.190322   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.190466   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.190712   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.190904   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.191064   64368 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:26:44.275738   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:26:44.298801   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:26:44.322708   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:26:44.345664   64368 provision.go:87] duration metric: took 731.75755ms to configureAuth
	I0815 18:26:44.345693   64368 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:26:44.345879   64368 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:26:44.345951   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.348519   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.348878   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.348897   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.349049   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.349241   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.349411   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.349536   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.349711   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:44.349910   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:44.349934   64368 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:26:44.618371   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:26:44.618399   64368 main.go:141] libmachine: Checking connection to Docker...
	I0815 18:26:44.618408   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetURL
	I0815 18:26:44.619722   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using libvirt version 6000000
	I0815 18:26:44.621759   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.622141   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.622173   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.622278   64368 main.go:141] libmachine: Docker is up and running!
	I0815 18:26:44.622299   64368 main.go:141] libmachine: Reticulating splines...
	I0815 18:26:44.622306   64368 client.go:171] duration metric: took 24.862411526s to LocalClient.Create
	I0815 18:26:44.622336   64368 start.go:167] duration metric: took 24.862501737s to libmachine.API.Create "old-k8s-version-278865"
	I0815 18:26:44.622345   64368 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:26:44.622354   64368 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:26:44.622372   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.622625   64368 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:26:44.622656   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.624769   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.625099   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.625126   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.625269   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.625451   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.625624   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.625791   64368 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:26:44.707742   64368 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:26:44.711983   64368 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:26:44.712010   64368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:26:44.712082   64368 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:26:44.712189   64368 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:26:44.712278   64368 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:26:44.721962   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:26:44.745826   64368 start.go:296] duration metric: took 123.470495ms for postStartSetup
	I0815 18:26:44.745872   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:26:44.746401   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:26:44.748801   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.749223   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.749245   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.749591   64368 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:26:44.749795   64368 start.go:128] duration metric: took 25.011607097s to createHost
	I0815 18:26:44.749819   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.752199   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.752643   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.752667   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.752833   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.753037   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.753188   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.753331   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.753492   64368 main.go:141] libmachine: Using SSH client type: native
	I0815 18:26:44.753656   64368 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:26:44.753675   64368 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:26:44.857401   64368 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746404.831725234
	
	I0815 18:26:44.857433   64368 fix.go:216] guest clock: 1723746404.831725234
	I0815 18:26:44.857444   64368 fix.go:229] Guest: 2024-08-15 18:26:44.831725234 +0000 UTC Remote: 2024-08-15 18:26:44.749808719 +0000 UTC m=+50.995480451 (delta=81.916515ms)
	I0815 18:26:44.857483   64368 fix.go:200] guest clock delta is within tolerance: 81.916515ms
	I0815 18:26:44.857491   64368 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 25.119510908s
	I0815 18:26:44.857518   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.857805   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:26:44.860347   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.860781   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.860810   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.860957   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.861677   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.861892   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:26:44.861979   64368 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:26:44.862025   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.862102   64368 ssh_runner.go:195] Run: cat /version.json
	I0815 18:26:44.862125   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:26:44.865296   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.865461   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.865682   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.865711   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.865863   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.865878   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:44.865906   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:44.866041   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.866055   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:26:44.866211   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:26:44.866215   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.866382   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:26:44.866383   64368 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:26:44.866560   64368 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:26:44.968627   64368 ssh_runner.go:195] Run: systemctl --version
	I0815 18:26:44.974805   64368 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:26:45.138147   64368 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:26:45.144313   64368 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:26:45.144377   64368 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:26:45.160056   64368 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:26:45.160081   64368 start.go:495] detecting cgroup driver to use...
	I0815 18:26:45.160158   64368 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:26:45.178269   64368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:26:45.193028   64368 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:26:45.193089   64368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:26:45.207037   64368 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:26:45.226663   64368 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:26:45.358346   64368 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:26:45.506003   64368 docker.go:233] disabling docker service ...
	I0815 18:26:45.506076   64368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:26:45.527696   64368 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:26:45.552024   64368 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:26:45.701984   64368 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:26:45.817868   64368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:26:45.832805   64368 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:26:45.854229   64368 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:26:45.854291   64368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:26:45.866467   64368 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:26:45.866524   64368 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:26:45.877712   64368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:26:45.890109   64368 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:26:45.902120   64368 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:26:45.914157   64368 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:26:45.925673   64368 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:26:45.925731   64368 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:26:45.946835   64368 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:26:45.958098   64368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:26:46.099956   64368 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:26:46.255366   64368 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:26:46.255441   64368 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:26:46.260549   64368 start.go:563] Will wait 60s for crictl version
	I0815 18:26:46.260611   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:46.264202   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:26:46.308555   64368 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:26:46.308649   64368 ssh_runner.go:195] Run: crio --version
	I0815 18:26:46.337148   64368 ssh_runner.go:195] Run: crio --version
	I0815 18:26:46.367888   64368 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:26:46.369308   64368 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:26:46.372288   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:46.372697   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:26:46.372730   64368 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:26:46.372922   64368 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:26:46.377064   64368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:26:46.391250   64368 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:26:46.391386   64368 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:26:46.391451   64368 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:26:46.433479   64368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:26:46.433561   64368 ssh_runner.go:195] Run: which lz4
	I0815 18:26:46.438432   64368 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:26:46.442642   64368 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:26:46.442666   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:26:48.132589   64368 crio.go:462] duration metric: took 1.694196037s to copy over tarball
	I0815 18:26:48.132674   64368 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:26:50.690623   64368 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.557914585s)
	I0815 18:26:50.690667   64368 crio.go:469] duration metric: took 2.558038185s to extract the tarball
	I0815 18:26:50.690678   64368 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:26:50.734827   64368 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:26:50.781327   64368 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:26:50.781351   64368 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:26:50.781472   64368 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:50.781494   64368 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:50.781509   64368 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:26:50.781522   64368 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:26:50.781438   64368 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:50.781548   64368 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:50.781438   64368 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:50.781440   64368 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:26:50.783531   64368 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:26:50.783595   64368 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:50.783606   64368 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:50.783621   64368 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:50.783542   64368 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:26:50.783549   64368 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:50.783676   64368 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:26:50.783567   64368 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:50.949780   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:50.960111   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:26:51.000741   64368 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:26:51.000796   64368 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:51.000846   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.008063   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.012373   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:51.012446   64368 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:26:51.012518   64368 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:26:51.012561   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.059855   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:26:51.059964   64368 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:26:51.060036   64368 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.060080   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:51.060116   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.097244   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.113511   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:26:51.113550   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.113604   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:26:51.137186   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.155409   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.173814   64368 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:26:51.173862   64368 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.173912   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.228022   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:26:51.238588   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.238597   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:26:51.278941   64368 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:26:51.278981   64368 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.279032   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.283071   64368 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:26:51.283118   64368 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.283172   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.283177   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.307891   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:26:51.320952   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:26:51.321007   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.321008   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.325798   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.353707   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.416632   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:26:51.435034   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.435089   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.459584   64368 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:26:51.459633   64368 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.459664   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:26:51.459679   64368 ssh_runner.go:195] Run: which crictl
	I0815 18:26:51.513398   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:26:51.513440   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:26:51.530000   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.530050   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:26:51.586456   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:26:51.586462   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:26:51.597704   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.629930   64368 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:26:51.674431   64368 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:26:51.711237   64368 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:26:51.863355   64368 cache_images.go:92] duration metric: took 1.081984361s to LoadCachedImages
	W0815 18:26:51.863460   64368 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0815 18:26:51.863476   64368 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:26:51.863612   64368 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
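The kubelet command line shown in the unit snippet above is not applied directly; it is written to a systemd drop-in a few steps later (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A minimal sketch of verifying on the node what actually landed there, assuming shell access to the VM (systemctl cat is a standard systemd command, not taken from this log):

    # show the generated drop-in and the effective unit the kubelet will run with
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl cat kubelet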
	I0815 18:26:51.863694   64368 ssh_runner.go:195] Run: crio config
	I0815 18:26:51.915393   64368 cni.go:84] Creating CNI manager for ""
	I0815 18:26:51.915466   64368 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:26:51.915482   64368 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:26:51.915509   64368 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:26:51.915688   64368 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:26:51.915758   64368 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:26:51.930339   64368 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:26:51.930423   64368 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:26:51.943602   64368 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:26:51.960687   64368 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:26:51.978511   64368 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
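The rendered kubeadm config (the YAML blob above, 2120 bytes) has just been staged as /var/tmp/minikube/kubeadm.yaml.new; it is copied to /var/tmp/minikube/kubeadm.yaml and consumed by the kubeadm init run further down in this log. A condensed sketch of that invocation, using the binary path and config path shown in the log (the preflight-error list is abbreviated here; the full list appears in the actual Start line below):

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem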
	I0815 18:26:51.996372   64368 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:26:52.000262   64368 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:26:52.013040   64368 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:26:52.138671   64368 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:26:52.157474   64368 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:26:52.157508   64368 certs.go:194] generating shared ca certs ...
	I0815 18:26:52.157530   64368 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.157717   64368 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:26:52.157775   64368 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:26:52.157793   64368 certs.go:256] generating profile certs ...
	I0815 18:26:52.157870   64368 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:26:52.157891   64368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt with IP's: []
	I0815 18:26:52.256783   64368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt ...
	I0815 18:26:52.256817   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: {Name:mk489eb0952cf53a915129fd288ab2fd07350a45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.257013   64368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key ...
	I0815 18:26:52.257029   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key: {Name:mke75b69e3e7b80a3685923312134ea2bd16478b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.257133   64368 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:26:52.257157   64368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt.b00e3c1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.89]
	I0815 18:26:52.514942   64368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt.b00e3c1a ...
	I0815 18:26:52.514971   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt.b00e3c1a: {Name:mk71bef92184d414517f936910c6b02a23ca09b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.515125   64368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a ...
	I0815 18:26:52.515138   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a: {Name:mk37306c17621e8d5ca942be7928f51bd17080bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.515212   64368 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt.b00e3c1a -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt
	I0815 18:26:52.515294   64368 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key
	I0815 18:26:52.515345   64368 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:26:52.515360   64368 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt with IP's: []
	I0815 18:26:52.594313   64368 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt ...
	I0815 18:26:52.594340   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt: {Name:mk9a46182f3609e6c7e843c3472924b6ae54f09a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.594530   64368 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key ...
	I0815 18:26:52.594547   64368 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key: {Name:mk88cd43667643e2f89a51eb09f9690b55733f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:26:52.594747   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:26:52.594785   64368 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:26:52.594795   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:26:52.594822   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:26:52.594844   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:26:52.594866   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:26:52.594902   64368 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:26:52.595486   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:26:52.628555   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:26:52.656996   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:26:52.685663   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:26:52.710069   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:26:52.735575   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:26:52.759775   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:26:52.785418   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:26:52.809846   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:26:52.836180   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:26:52.862384   64368 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:26:52.890166   64368 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:26:52.908973   64368 ssh_runner.go:195] Run: openssl version
	I0815 18:26:52.915186   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:26:52.927217   64368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:26:52.932155   64368 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:26:52.932220   64368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:26:52.938718   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:26:52.950738   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:26:52.962656   64368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:26:52.967654   64368 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:26:52.967723   64368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:26:52.974702   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:26:52.993677   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:26:53.013657   64368 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:26:53.019869   64368 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:26:53.019936   64368 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:26:53.030433   64368 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
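The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is exactly what the preceding openssl x509 -hash calls compute. A minimal sketch of deriving one of those names by hand, using the minikubeCA path from the log:

    # prints the subject hash (here b5213941), which becomes /etc/ssl/certs/b5213941.0
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem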
	I0815 18:26:53.053149   64368 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:26:53.058133   64368 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 18:26:53.058206   64368 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:26:53.058320   64368 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:26:53.058382   64368 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:26:53.102549   64368 cri.go:89] found id: ""
	I0815 18:26:53.102635   64368 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:26:53.113775   64368 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:26:53.124452   64368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:26:53.135251   64368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:26:53.135277   64368 kubeadm.go:157] found existing configuration files:
	
	I0815 18:26:53.135332   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:26:53.145809   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:26:53.145875   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:26:53.155995   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:26:53.166137   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:26:53.166212   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:26:53.176057   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:26:53.187552   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:26:53.187628   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:26:53.199467   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:26:53.209707   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:26:53.209774   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:26:53.220118   64368 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:26:53.362998   64368 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:26:53.363125   64368 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:26:53.532702   64368 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:26:53.532876   64368 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:26:53.532987   64368 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:26:53.733641   64368 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:26:53.923997   64368 out.go:235]   - Generating certificates and keys ...
	I0815 18:26:53.924126   64368 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:26:53.924258   64368 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:26:53.945246   64368 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 18:26:54.037805   64368 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 18:26:54.212969   64368 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 18:26:54.386613   64368 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 18:26:54.622292   64368 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 18:26:54.622494   64368 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-278865] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0815 18:26:54.764357   64368 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 18:26:54.764788   64368 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-278865] and IPs [192.168.39.89 127.0.0.1 ::1]
	I0815 18:26:54.952589   64368 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 18:26:55.225522   64368 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 18:26:55.650538   64368 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 18:26:55.650961   64368 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:26:55.762725   64368 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:26:56.074128   64368 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:26:56.262699   64368 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:26:56.452684   64368 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:26:56.468703   64368 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:26:56.469284   64368 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:26:56.469353   64368 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:26:56.630461   64368 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:26:56.632158   64368 out.go:235]   - Booting up control plane ...
	I0815 18:26:56.632301   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:26:56.640533   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:26:56.642342   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:26:56.644109   64368 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:26:56.650394   64368 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:27:36.643293   64368 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:27:36.643859   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:27:36.644100   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:27:41.644510   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:27:41.644794   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:27:51.644192   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:27:51.644439   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:28:11.643749   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:28:11.644003   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:28:51.645867   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:28:51.646502   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:28:51.646532   64368 kubeadm.go:310] 
	I0815 18:28:51.646620   64368 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:28:51.646710   64368 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:28:51.646720   64368 kubeadm.go:310] 
	I0815 18:28:51.646797   64368 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:28:51.646872   64368 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:28:51.647112   64368 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:28:51.647123   64368 kubeadm.go:310] 
	I0815 18:28:51.647356   64368 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:28:51.647436   64368 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:28:51.647509   64368 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:28:51.647521   64368 kubeadm.go:310] 
	I0815 18:28:51.647758   64368 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:28:51.647944   64368 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:28:51.647956   64368 kubeadm.go:310] 
	I0815 18:28:51.648183   64368 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:28:51.648389   64368 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:28:51.648576   64368 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:28:51.648817   64368 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:28:51.648839   64368 kubeadm.go:310] 
	I0815 18:28:51.649713   64368 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:28:51.649843   64368 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:28:51.649960   64368 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
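kubeadm's hint block above reduces to a short diagnostic sequence on the node; a consolidated sketch using only the commands the log itself suggests (CONTAINERID is a placeholder for whatever the ps listing surfaces):

    systemctl status kubelet
    journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID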
	W0815 18:28:51.650092   64368 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-278865] and IPs [192.168.39.89 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-278865] and IPs [192.168.39.89 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-278865] and IPs [192.168.39.89 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-278865] and IPs [192.168.39.89 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 18:28:51.650135   64368 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:28:53.070008   64368 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.419845953s)
	I0815 18:28:53.070101   64368 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:28:53.084149   64368 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:28:53.094153   64368 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:28:53.094170   64368 kubeadm.go:157] found existing configuration files:
	
	I0815 18:28:53.094212   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:28:53.103880   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:28:53.103946   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:28:53.114099   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:28:53.123508   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:28:53.123574   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:28:53.133572   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:28:53.142595   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:28:53.142648   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:28:53.152725   64368 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:28:53.162196   64368 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:28:53.162253   64368 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:28:53.171521   64368 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:28:53.249366   64368 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:28:53.257406   64368 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:28:53.413220   64368 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:28:53.413343   64368 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:28:53.413515   64368 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:28:53.612333   64368 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:28:53.614435   64368 out.go:235]   - Generating certificates and keys ...
	I0815 18:28:53.614548   64368 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:28:53.614635   64368 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:28:53.614765   64368 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:28:53.614853   64368 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:28:53.614947   64368 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:28:53.615019   64368 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:28:53.615325   64368 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:28:53.615699   64368 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:28:53.616149   64368 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:28:53.616868   64368 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:28:53.616982   64368 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:28:53.617071   64368 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:28:53.778844   64368 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:28:54.224573   64368 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:28:54.861636   64368 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:28:54.936684   64368 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:28:54.951036   64368 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:28:54.952440   64368 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:28:54.952544   64368 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:28:55.093981   64368 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:28:55.095841   64368 out.go:235]   - Booting up control plane ...
	I0815 18:28:55.095978   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:28:55.102399   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:28:55.102848   64368 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:28:55.105802   64368 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:28:55.108854   64368 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:29:35.111590   64368 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:29:35.111727   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:29:35.111951   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:29:40.112442   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:29:40.112658   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:29:50.113211   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:29:50.113425   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:30:10.112760   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:30:10.112953   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:30:50.113290   64368 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:30:50.113533   64368 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:30:50.113549   64368 kubeadm.go:310] 
	I0815 18:30:50.113608   64368 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:30:50.113673   64368 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:30:50.113684   64368 kubeadm.go:310] 
	I0815 18:30:50.113712   64368 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:30:50.113741   64368 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:30:50.113833   64368 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:30:50.113857   64368 kubeadm.go:310] 
	I0815 18:30:50.113996   64368 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:30:50.114049   64368 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:30:50.114095   64368 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:30:50.114105   64368 kubeadm.go:310] 
	I0815 18:30:50.114249   64368 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:30:50.114321   64368 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:30:50.114328   64368 kubeadm.go:310] 
	I0815 18:30:50.114426   64368 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:30:50.114547   64368 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:30:50.114663   64368 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:30:50.114764   64368 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:30:50.114773   64368 kubeadm.go:310] 
	I0815 18:30:50.115378   64368 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:30:50.115454   64368 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:30:50.115524   64368 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:30:50.115614   64368 kubeadm.go:394] duration metric: took 3m57.057412402s to StartCluster
	I0815 18:30:50.115671   64368 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:30:50.115753   64368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:30:50.156963   64368 cri.go:89] found id: ""
	I0815 18:30:50.157011   64368 logs.go:276] 0 containers: []
	W0815 18:30:50.157023   64368 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:30:50.157036   64368 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:30:50.157099   64368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:30:50.191959   64368 cri.go:89] found id: ""
	I0815 18:30:50.191993   64368 logs.go:276] 0 containers: []
	W0815 18:30:50.192004   64368 logs.go:278] No container was found matching "etcd"
	I0815 18:30:50.192012   64368 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:30:50.192072   64368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:30:50.225988   64368 cri.go:89] found id: ""
	I0815 18:30:50.226022   64368 logs.go:276] 0 containers: []
	W0815 18:30:50.226033   64368 logs.go:278] No container was found matching "coredns"
	I0815 18:30:50.226044   64368 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:30:50.226115   64368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:30:50.285980   64368 cri.go:89] found id: ""
	I0815 18:30:50.286011   64368 logs.go:276] 0 containers: []
	W0815 18:30:50.286025   64368 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:30:50.286038   64368 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:30:50.286148   64368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:30:50.319265   64368 cri.go:89] found id: ""
	I0815 18:30:50.319292   64368 logs.go:276] 0 containers: []
	W0815 18:30:50.319303   64368 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:30:50.319311   64368 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:30:50.319373   64368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:30:50.352445   64368 cri.go:89] found id: ""
	I0815 18:30:50.352471   64368 logs.go:276] 0 containers: []
	W0815 18:30:50.352479   64368 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:30:50.352495   64368 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:30:50.352553   64368 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:30:50.389616   64368 cri.go:89] found id: ""
	I0815 18:30:50.389646   64368 logs.go:276] 0 containers: []
	W0815 18:30:50.389655   64368 logs.go:278] No container was found matching "kindnet"
	I0815 18:30:50.389665   64368 logs.go:123] Gathering logs for dmesg ...
	I0815 18:30:50.389677   64368 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:30:50.405546   64368 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:30:50.405577   64368 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:30:50.523336   64368 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:30:50.523364   64368 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:30:50.523379   64368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:30:50.631854   64368 logs.go:123] Gathering logs for container status ...
	I0815 18:30:50.631891   64368 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:30:50.670964   64368 logs.go:123] Gathering logs for kubelet ...
	I0815 18:30:50.670996   64368 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 18:30:50.723714   64368 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:30:50.723773   64368 out.go:270] * 
	* 
	W0815 18:30:50.723842   64368 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:30:50.723862   64368 out.go:270] * 
	* 
	W0815 18:30:50.724680   64368 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:30:50.727644   64368 out.go:201] 
	W0815 18:30:50.729178   64368 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:30:50.729223   64368 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:30:50.729246   64368 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:30:50.730706   64368 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-278865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 6 (226.432715ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:30:51.000450   67564 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-278865" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (297.27s)
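The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit path: kubeadm's health probe against http://localhost:10248/healthz never answers, so the wait-control-plane phase times out after roughly four minutes and no control-plane containers are found. A minimal triage sketch, using only commands the kubeadm output and the minikube suggestion above already name (the profile name and flags are copied from this run; whether the cgroup-driver override actually resolves the failure is an assumption, not a result from this report):

	# Inspect the kubelet inside the guest (commands suggested by the kubeadm output above)
	minikube -p old-k8s-version-278865 ssh -- sudo systemctl status kubelet
	minikube -p old-k8s-version-278865 ssh -- sudo journalctl -xeu kubelet | tail -n 100

	# See whether CRI-O managed to start any control-plane containers at all
	minikube -p old-k8s-version-278865 ssh -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"

	# Retry the start with the cgroup driver pinned to systemd, as the suggestion printed above proposes
	# (assumption: this addresses a cgroup-driver mismatch; it is the hint minikube prints, not a verified fix)
	minikube start -p old-k8s-version-278865 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd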

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-599042 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-599042 --alsologtostderr -v=3: exit status 82 (2m0.483755782s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-599042"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 18:28:57.535297   66856 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:28:57.535412   66856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:28:57.535421   66856 out.go:358] Setting ErrFile to fd 2...
	I0815 18:28:57.535425   66856 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:28:57.535604   66856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:28:57.535814   66856 out.go:352] Setting JSON to false
	I0815 18:28:57.535883   66856 mustload.go:65] Loading cluster: no-preload-599042
	I0815 18:28:57.536205   66856 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:28:57.536273   66856 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/config.json ...
	I0815 18:28:57.536453   66856 mustload.go:65] Loading cluster: no-preload-599042
	I0815 18:28:57.536589   66856 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:28:57.536615   66856 stop.go:39] StopHost: no-preload-599042
	I0815 18:28:57.537024   66856 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:28:57.537082   66856 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:28:57.551612   66856 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46391
	I0815 18:28:57.552073   66856 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:28:57.552676   66856 main.go:141] libmachine: Using API Version  1
	I0815 18:28:57.552714   66856 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:28:57.553028   66856 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:28:57.555549   66856 out.go:177] * Stopping node "no-preload-599042"  ...
	I0815 18:28:57.556704   66856 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 18:28:57.556747   66856 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:28:57.557006   66856 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 18:28:57.557032   66856 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:28:57.560176   66856 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:28:57.560547   66856 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:27:46 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:28:57.560578   66856 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:28:57.560832   66856 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:28:57.561017   66856 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:28:57.561164   66856 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:28:57.561286   66856 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:28:57.652800   66856 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 18:28:57.711372   66856 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 18:28:57.774740   66856 main.go:141] libmachine: Stopping "no-preload-599042"...
	I0815 18:28:57.774769   66856 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:28:57.776404   66856 main.go:141] libmachine: (no-preload-599042) Calling .Stop
	I0815 18:28:57.780203   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 0/120
	I0815 18:28:58.781783   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 1/120
	I0815 18:28:59.783206   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 2/120
	I0815 18:29:00.785007   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 3/120
	I0815 18:29:01.786861   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 4/120
	I0815 18:29:02.788714   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 5/120
	I0815 18:29:03.791314   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 6/120
	I0815 18:29:04.792625   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 7/120
	I0815 18:29:05.794078   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 8/120
	I0815 18:29:06.795267   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 9/120
	I0815 18:29:07.797049   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 10/120
	I0815 18:29:08.799038   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 11/120
	I0815 18:29:09.800536   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 12/120
	I0815 18:29:10.801888   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 13/120
	I0815 18:29:11.803309   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 14/120
	I0815 18:29:12.805549   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 15/120
	I0815 18:29:13.806906   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 16/120
	I0815 18:29:14.808635   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 17/120
	I0815 18:29:15.809805   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 18/120
	I0815 18:29:16.811018   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 19/120
	I0815 18:29:17.813220   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 20/120
	I0815 18:29:18.814563   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 21/120
	I0815 18:29:19.816076   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 22/120
	I0815 18:29:20.817581   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 23/120
	I0815 18:29:21.819145   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 24/120
	I0815 18:29:22.821332   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 25/120
	I0815 18:29:23.823591   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 26/120
	I0815 18:29:24.825076   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 27/120
	I0815 18:29:25.827285   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 28/120
	I0815 18:29:26.828650   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 29/120
	I0815 18:29:27.830737   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 30/120
	I0815 18:29:28.832067   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 31/120
	I0815 18:29:29.833462   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 32/120
	I0815 18:29:30.834991   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 33/120
	I0815 18:29:31.836345   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 34/120
	I0815 18:29:32.838457   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 35/120
	I0815 18:29:33.839856   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 36/120
	I0815 18:29:34.841211   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 37/120
	I0815 18:29:35.842499   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 38/120
	I0815 18:29:36.844046   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 39/120
	I0815 18:29:37.846277   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 40/120
	I0815 18:29:38.847661   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 41/120
	I0815 18:29:39.849046   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 42/120
	I0815 18:29:40.850494   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 43/120
	I0815 18:29:41.851774   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 44/120
	I0815 18:29:42.854065   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 45/120
	I0815 18:29:43.855426   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 46/120
	I0815 18:29:44.856723   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 47/120
	I0815 18:29:45.858157   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 48/120
	I0815 18:29:46.859372   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 49/120
	I0815 18:29:47.861313   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 50/120
	I0815 18:29:48.862561   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 51/120
	I0815 18:29:49.863824   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 52/120
	I0815 18:29:50.865296   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 53/120
	I0815 18:29:51.866584   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 54/120
	I0815 18:29:52.868510   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 55/120
	I0815 18:29:53.869860   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 56/120
	I0815 18:29:54.871009   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 57/120
	I0815 18:29:55.872281   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 58/120
	I0815 18:29:56.873455   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 59/120
	I0815 18:29:57.875608   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 60/120
	I0815 18:29:58.877076   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 61/120
	I0815 18:29:59.878352   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 62/120
	I0815 18:30:00.879795   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 63/120
	I0815 18:30:01.881274   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 64/120
	I0815 18:30:02.883353   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 65/120
	I0815 18:30:03.885007   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 66/120
	I0815 18:30:04.886377   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 67/120
	I0815 18:30:05.887843   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 68/120
	I0815 18:30:06.889211   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 69/120
	I0815 18:30:07.891408   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 70/120
	I0815 18:30:08.892938   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 71/120
	I0815 18:30:09.894303   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 72/120
	I0815 18:30:10.895975   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 73/120
	I0815 18:30:11.897490   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 74/120
	I0815 18:30:12.899515   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 75/120
	I0815 18:30:13.900928   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 76/120
	I0815 18:30:14.902397   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 77/120
	I0815 18:30:15.903739   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 78/120
	I0815 18:30:16.905305   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 79/120
	I0815 18:30:17.907470   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 80/120
	I0815 18:30:18.908888   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 81/120
	I0815 18:30:19.910377   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 82/120
	I0815 18:30:20.911776   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 83/120
	I0815 18:30:21.913066   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 84/120
	I0815 18:30:22.915101   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 85/120
	I0815 18:30:23.916529   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 86/120
	I0815 18:30:24.917867   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 87/120
	I0815 18:30:25.919252   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 88/120
	I0815 18:30:26.920753   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 89/120
	I0815 18:30:27.922969   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 90/120
	I0815 18:30:28.924330   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 91/120
	I0815 18:30:29.925692   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 92/120
	I0815 18:30:30.926980   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 93/120
	I0815 18:30:31.928349   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 94/120
	I0815 18:30:32.930268   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 95/120
	I0815 18:30:33.931568   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 96/120
	I0815 18:30:34.933116   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 97/120
	I0815 18:30:35.934434   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 98/120
	I0815 18:30:36.935869   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 99/120
	I0815 18:30:37.937243   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 100/120
	I0815 18:30:38.938497   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 101/120
	I0815 18:30:39.939872   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 102/120
	I0815 18:30:40.941254   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 103/120
	I0815 18:30:41.942566   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 104/120
	I0815 18:30:42.944691   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 105/120
	I0815 18:30:43.946240   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 106/120
	I0815 18:30:44.947694   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 107/120
	I0815 18:30:45.949154   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 108/120
	I0815 18:30:46.950558   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 109/120
	I0815 18:30:47.952762   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 110/120
	I0815 18:30:48.954173   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 111/120
	I0815 18:30:49.955644   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 112/120
	I0815 18:30:50.956908   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 113/120
	I0815 18:30:51.958294   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 114/120
	I0815 18:30:52.960000   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 115/120
	I0815 18:30:53.961430   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 116/120
	I0815 18:30:54.962878   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 117/120
	I0815 18:30:55.964319   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 118/120
	I0815 18:30:56.965869   66856 main.go:141] libmachine: (no-preload-599042) Waiting for machine to stop 119/120
	I0815 18:30:57.966717   66856 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 18:30:57.966805   66856 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 18:30:57.968747   66856 out.go:201] 
	W0815 18:30:57.970332   66856 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 18:30:57.970350   66856 out.go:270] * 
	* 
	W0815 18:30:57.972969   66856 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:30:57.974090   66856 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-599042 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042: exit status 3 (18.584515868s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:31:16.560833   67725 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.14:22: connect: no route to host
	E0815 18:31:16.560852   67725 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.14:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-599042" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.07s)
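The GUEST_STOP_TIMEOUT above is produced by the driver polling the VM once per second and giving up after 120 attempts while libvirt still reports the domain as "Running". As a rough, standalone illustration of that poll-until-shutoff pattern (not minikube's actual kvm2 driver code), the Go sketch below shells out to virsh and assumes the libvirt domain is named after the profile seen in the log (no-preload-599042):

	// poll_stop.go — illustrative only; assumes virsh is installed and the
	// libvirt domain name matches the minikube profile name from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForShutoff asks libvirt for the domain state once per second and
	// gives up after the given number of attempts, mirroring the
	// "Waiting for machine to stop N/120" loop in the log above.
	func waitForShutoff(domain string, attempts int) error {
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("virsh", "domstate", domain).Output()
			if err != nil {
				return fmt.Errorf("virsh domstate %s: %w", domain, err)
			}
			state := strings.TrimSpace(string(out))
			if state == "shut off" {
				return nil // the VM actually stopped
			}
			fmt.Printf("Waiting for machine to stop %d/%d (state=%q)\n", i, attempts, state)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, still not shut off after %d attempts", attempts)
	}

	func main() {
		if err := waitForShutoff("no-preload-599042", 120); err != nil {
			fmt.Println("stop err:", err)
		}
	}

If the domain never reaches "shut off" within the window, this sketch fails the same way the test does; manually, a hard power-off with `virsh destroy <domain>` is the usual next step when a graceful stop hangs like this.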

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-555028 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-555028 --alsologtostderr -v=3: exit status 82 (2m0.506194828s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-555028"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 18:29:33.328291   67185 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:29:33.328610   67185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:29:33.328624   67185 out.go:358] Setting ErrFile to fd 2...
	I0815 18:29:33.328629   67185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:29:33.328846   67185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:29:33.329109   67185 out.go:352] Setting JSON to false
	I0815 18:29:33.329186   67185 mustload.go:65] Loading cluster: embed-certs-555028
	I0815 18:29:33.329508   67185 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:29:33.329574   67185 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/config.json ...
	I0815 18:29:33.329753   67185 mustload.go:65] Loading cluster: embed-certs-555028
	I0815 18:29:33.329876   67185 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:29:33.329909   67185 stop.go:39] StopHost: embed-certs-555028
	I0815 18:29:33.330307   67185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:29:33.330347   67185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:29:33.344466   67185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44175
	I0815 18:29:33.344914   67185 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:29:33.345555   67185 main.go:141] libmachine: Using API Version  1
	I0815 18:29:33.345576   67185 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:29:33.345852   67185 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:29:33.348320   67185 out.go:177] * Stopping node "embed-certs-555028"  ...
	I0815 18:29:33.349663   67185 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 18:29:33.349709   67185 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:29:33.349996   67185 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 18:29:33.350015   67185 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:29:33.352837   67185 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:29:33.353284   67185 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:28:15 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:29:33.353314   67185 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:29:33.353434   67185 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:29:33.353584   67185 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:29:33.353745   67185 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:29:33.353903   67185 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:29:33.459170   67185 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 18:29:33.517781   67185 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 18:29:33.586838   67185 main.go:141] libmachine: Stopping "embed-certs-555028"...
	I0815 18:29:33.586881   67185 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:29:33.588717   67185 main.go:141] libmachine: (embed-certs-555028) Calling .Stop
	I0815 18:29:33.592660   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 0/120
	I0815 18:29:34.595211   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 1/120
	I0815 18:29:35.596479   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 2/120
	I0815 18:29:36.598015   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 3/120
	I0815 18:29:37.599949   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 4/120
	I0815 18:29:38.601861   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 5/120
	I0815 18:29:39.603074   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 6/120
	I0815 18:29:40.604536   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 7/120
	I0815 18:29:41.606229   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 8/120
	I0815 18:29:42.607685   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 9/120
	I0815 18:29:43.609787   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 10/120
	I0815 18:29:44.610979   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 11/120
	I0815 18:29:45.612465   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 12/120
	I0815 18:29:46.614040   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 13/120
	I0815 18:29:47.615160   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 14/120
	I0815 18:29:48.616609   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 15/120
	I0815 18:29:49.617932   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 16/120
	I0815 18:29:50.619342   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 17/120
	I0815 18:29:51.620653   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 18/120
	I0815 18:29:52.621902   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 19/120
	I0815 18:29:53.623897   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 20/120
	I0815 18:29:54.625110   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 21/120
	I0815 18:29:55.626450   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 22/120
	I0815 18:29:56.627674   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 23/120
	I0815 18:29:57.629021   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 24/120
	I0815 18:29:58.630935   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 25/120
	I0815 18:29:59.632345   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 26/120
	I0815 18:30:00.633758   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 27/120
	I0815 18:30:01.635328   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 28/120
	I0815 18:30:02.636819   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 29/120
	I0815 18:30:03.639075   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 30/120
	I0815 18:30:04.640364   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 31/120
	I0815 18:30:05.641805   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 32/120
	I0815 18:30:06.643222   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 33/120
	I0815 18:30:07.644771   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 34/120
	I0815 18:30:08.646736   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 35/120
	I0815 18:30:09.648253   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 36/120
	I0815 18:30:10.649805   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 37/120
	I0815 18:30:11.651184   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 38/120
	I0815 18:30:12.652867   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 39/120
	I0815 18:30:13.654924   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 40/120
	I0815 18:30:14.656591   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 41/120
	I0815 18:30:15.657950   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 42/120
	I0815 18:30:16.660044   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 43/120
	I0815 18:30:17.661424   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 44/120
	I0815 18:30:18.663317   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 45/120
	I0815 18:30:19.665003   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 46/120
	I0815 18:30:20.666897   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 47/120
	I0815 18:30:21.668608   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 48/120
	I0815 18:30:22.670468   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 49/120
	I0815 18:30:23.672381   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 50/120
	I0815 18:30:24.673685   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 51/120
	I0815 18:30:25.675533   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 52/120
	I0815 18:30:26.676778   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 53/120
	I0815 18:30:27.678638   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 54/120
	I0815 18:30:28.680269   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 55/120
	I0815 18:30:29.681662   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 56/120
	I0815 18:30:30.683702   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 57/120
	I0815 18:30:31.684998   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 58/120
	I0815 18:30:32.686723   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 59/120
	I0815 18:30:33.688460   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 60/120
	I0815 18:30:34.689845   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 61/120
	I0815 18:30:35.691365   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 62/120
	I0815 18:30:36.693118   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 63/120
	I0815 18:30:37.694518   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 64/120
	I0815 18:30:38.696306   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 65/120
	I0815 18:30:39.697725   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 66/120
	I0815 18:30:40.699121   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 67/120
	I0815 18:30:41.700516   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 68/120
	I0815 18:30:42.701818   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 69/120
	I0815 18:30:43.704036   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 70/120
	I0815 18:30:44.705476   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 71/120
	I0815 18:30:45.706664   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 72/120
	I0815 18:30:46.708142   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 73/120
	I0815 18:30:47.709460   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 74/120
	I0815 18:30:48.711702   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 75/120
	I0815 18:30:49.713407   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 76/120
	I0815 18:30:50.715016   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 77/120
	I0815 18:30:51.716220   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 78/120
	I0815 18:30:52.717534   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 79/120
	I0815 18:30:53.719518   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 80/120
	I0815 18:30:54.720835   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 81/120
	I0815 18:30:55.722304   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 82/120
	I0815 18:30:56.723706   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 83/120
	I0815 18:30:57.725348   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 84/120
	I0815 18:30:58.727523   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 85/120
	I0815 18:30:59.729021   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 86/120
	I0815 18:31:00.730444   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 87/120
	I0815 18:31:01.731890   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 88/120
	I0815 18:31:02.733289   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 89/120
	I0815 18:31:03.735608   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 90/120
	I0815 18:31:04.737087   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 91/120
	I0815 18:31:05.738527   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 92/120
	I0815 18:31:06.740120   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 93/120
	I0815 18:31:07.741587   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 94/120
	I0815 18:31:08.743665   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 95/120
	I0815 18:31:09.745196   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 96/120
	I0815 18:31:10.746776   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 97/120
	I0815 18:31:11.748259   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 98/120
	I0815 18:31:12.749592   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 99/120
	I0815 18:31:13.751724   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 100/120
	I0815 18:31:14.753946   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 101/120
	I0815 18:31:15.755288   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 102/120
	I0815 18:31:16.756850   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 103/120
	I0815 18:31:17.758262   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 104/120
	I0815 18:31:18.759916   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 105/120
	I0815 18:31:19.761484   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 106/120
	I0815 18:31:20.762951   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 107/120
	I0815 18:31:21.764367   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 108/120
	I0815 18:31:22.765643   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 109/120
	I0815 18:31:23.767612   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 110/120
	I0815 18:31:24.769121   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 111/120
	I0815 18:31:25.770498   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 112/120
	I0815 18:31:26.772104   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 113/120
	I0815 18:31:27.773654   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 114/120
	I0815 18:31:28.775711   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 115/120
	I0815 18:31:29.777180   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 116/120
	I0815 18:31:30.778722   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 117/120
	I0815 18:31:31.780553   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 118/120
	I0815 18:31:32.781989   67185 main.go:141] libmachine: (embed-certs-555028) Waiting for machine to stop 119/120
	I0815 18:31:33.783138   67185 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 18:31:33.783194   67185 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 18:31:33.785352   67185 out.go:201] 
	W0815 18:31:33.786904   67185 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 18:31:33.786921   67185 out.go:270] * 
	* 
	W0815 18:31:33.789512   67185 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:31:33.790777   67185 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-555028 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028: exit status 3 (18.607871837s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:31:52.400843   67993 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.234:22: connect: no route to host
	E0815 18:31:52.400878   67993 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.234:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-555028" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-423062 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-423062 --alsologtostderr -v=3: exit status 82 (2m0.524593852s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-423062"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 18:29:54.341801   67339 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:29:54.341925   67339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:29:54.341937   67339 out.go:358] Setting ErrFile to fd 2...
	I0815 18:29:54.341944   67339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:29:54.342116   67339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:29:54.342340   67339 out.go:352] Setting JSON to false
	I0815 18:29:54.342420   67339 mustload.go:65] Loading cluster: default-k8s-diff-port-423062
	I0815 18:29:54.342757   67339 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:29:54.342824   67339 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/config.json ...
	I0815 18:29:54.342982   67339 mustload.go:65] Loading cluster: default-k8s-diff-port-423062
	I0815 18:29:54.343085   67339 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:29:54.343107   67339 stop.go:39] StopHost: default-k8s-diff-port-423062
	I0815 18:29:54.343460   67339 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:29:54.343496   67339 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:29:54.358472   67339 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I0815 18:29:54.358986   67339 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:29:54.359620   67339 main.go:141] libmachine: Using API Version  1
	I0815 18:29:54.359646   67339 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:29:54.360038   67339 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:29:54.362246   67339 out.go:177] * Stopping node "default-k8s-diff-port-423062"  ...
	I0815 18:29:54.363739   67339 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0815 18:29:54.363779   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:29:54.363998   67339 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0815 18:29:54.364037   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:29:54.367239   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:29:54.367700   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:29:54.367732   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:29:54.367837   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:29:54.368030   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:29:54.368201   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:29:54.368363   67339 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:29:54.500270   67339 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0815 18:29:54.561006   67339 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0815 18:29:54.621540   67339 main.go:141] libmachine: Stopping "default-k8s-diff-port-423062"...
	I0815 18:29:54.621589   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:29:54.623168   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Stop
	I0815 18:29:54.627045   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 0/120
	I0815 18:29:55.628059   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 1/120
	I0815 18:29:56.629951   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 2/120
	I0815 18:29:57.630988   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 3/120
	I0815 18:29:58.632125   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 4/120
	I0815 18:29:59.633834   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 5/120
	I0815 18:30:00.634853   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 6/120
	I0815 18:30:01.635913   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 7/120
	I0815 18:30:02.637206   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 8/120
	I0815 18:30:03.638685   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 9/120
	I0815 18:30:04.640230   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 10/120
	I0815 18:30:05.641635   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 11/120
	I0815 18:30:06.643026   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 12/120
	I0815 18:30:07.644420   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 13/120
	I0815 18:30:08.646349   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 14/120
	I0815 18:30:09.648583   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 15/120
	I0815 18:30:10.650053   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 16/120
	I0815 18:30:11.651576   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 17/120
	I0815 18:30:12.653030   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 18/120
	I0815 18:30:13.654404   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 19/120
	I0815 18:30:14.656714   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 20/120
	I0815 18:30:15.658144   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 21/120
	I0815 18:30:16.659922   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 22/120
	I0815 18:30:17.661291   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 23/120
	I0815 18:30:18.662850   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 24/120
	I0815 18:30:19.665078   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 25/120
	I0815 18:30:20.666560   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 26/120
	I0815 18:30:21.667933   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 27/120
	I0815 18:30:22.669399   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 28/120
	I0815 18:30:23.670854   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 29/120
	I0815 18:30:24.673287   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 30/120
	I0815 18:30:25.674552   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 31/120
	I0815 18:30:26.676202   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 32/120
	I0815 18:30:27.677732   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 33/120
	I0815 18:30:28.679311   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 34/120
	I0815 18:30:29.681404   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 35/120
	I0815 18:30:30.682849   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 36/120
	I0815 18:30:31.684229   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 37/120
	I0815 18:30:32.685908   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 38/120
	I0815 18:30:33.687464   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 39/120
	I0815 18:30:34.689569   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 40/120
	I0815 18:30:35.690945   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 41/120
	I0815 18:30:36.692246   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 42/120
	I0815 18:30:37.693995   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 43/120
	I0815 18:30:38.695351   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 44/120
	I0815 18:30:39.697444   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 45/120
	I0815 18:30:40.698925   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 46/120
	I0815 18:30:41.700391   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 47/120
	I0815 18:30:42.701968   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 48/120
	I0815 18:30:43.703861   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 49/120
	I0815 18:30:44.705335   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 50/120
	I0815 18:30:45.706750   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 51/120
	I0815 18:30:46.708274   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 52/120
	I0815 18:30:47.710300   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 53/120
	I0815 18:30:48.712010   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 54/120
	I0815 18:30:49.713928   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 55/120
	I0815 18:30:50.715232   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 56/120
	I0815 18:30:51.717266   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 57/120
	I0815 18:30:52.719015   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 58/120
	I0815 18:30:53.720047   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 59/120
	I0815 18:30:54.721731   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 60/120
	I0815 18:30:55.722900   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 61/120
	I0815 18:30:56.724088   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 62/120
	I0815 18:30:57.726004   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 63/120
	I0815 18:30:58.727373   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 64/120
	I0815 18:30:59.729198   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 65/120
	I0815 18:31:00.730853   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 66/120
	I0815 18:31:01.732430   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 67/120
	I0815 18:31:02.734166   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 68/120
	I0815 18:31:03.736042   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 69/120
	I0815 18:31:04.737957   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 70/120
	I0815 18:31:05.739159   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 71/120
	I0815 18:31:06.740527   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 72/120
	I0815 18:31:07.742422   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 73/120
	I0815 18:31:08.743895   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 74/120
	I0815 18:31:09.745778   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 75/120
	I0815 18:31:10.747113   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 76/120
	I0815 18:31:11.748529   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 77/120
	I0815 18:31:12.749816   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 78/120
	I0815 18:31:13.751435   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 79/120
	I0815 18:31:14.753823   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 80/120
	I0815 18:31:15.755157   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 81/120
	I0815 18:31:16.756647   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 82/120
	I0815 18:31:17.757890   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 83/120
	I0815 18:31:18.759462   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 84/120
	I0815 18:31:19.761488   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 85/120
	I0815 18:31:20.762836   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 86/120
	I0815 18:31:21.764186   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 87/120
	I0815 18:31:22.765649   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 88/120
	I0815 18:31:23.767006   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 89/120
	I0815 18:31:24.769358   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 90/120
	I0815 18:31:25.770781   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 91/120
	I0815 18:31:26.772428   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 92/120
	I0815 18:31:27.773871   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 93/120
	I0815 18:31:28.775320   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 94/120
	I0815 18:31:29.777485   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 95/120
	I0815 18:31:30.779461   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 96/120
	I0815 18:31:31.780823   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 97/120
	I0815 18:31:32.782304   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 98/120
	I0815 18:31:33.783858   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 99/120
	I0815 18:31:34.785253   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 100/120
	I0815 18:31:35.786799   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 101/120
	I0815 18:31:36.788206   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 102/120
	I0815 18:31:37.789706   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 103/120
	I0815 18:31:38.791277   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 104/120
	I0815 18:31:39.793314   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 105/120
	I0815 18:31:40.794727   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 106/120
	I0815 18:31:41.796178   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 107/120
	I0815 18:31:42.797729   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 108/120
	I0815 18:31:43.799448   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 109/120
	I0815 18:31:44.801264   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 110/120
	I0815 18:31:45.803098   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 111/120
	I0815 18:31:46.804474   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 112/120
	I0815 18:31:47.805991   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 113/120
	I0815 18:31:48.807456   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 114/120
	I0815 18:31:49.809644   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 115/120
	I0815 18:31:50.811149   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 116/120
	I0815 18:31:51.812580   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 117/120
	I0815 18:31:52.813978   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 118/120
	I0815 18:31:53.815699   67339 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for machine to stop 119/120
	I0815 18:31:54.817142   67339 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0815 18:31:54.817197   67339 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0815 18:31:54.819340   67339 out.go:201] 
	W0815 18:31:54.820789   67339 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0815 18:31:54.820809   67339 out.go:270] * 
	* 
	W0815 18:31:54.823469   67339 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:31:54.824812   67339 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-423062 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062: exit status 3 (18.563564618s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:32:13.392845   68117 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host
	E0815 18:32:13.392868   68117 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-423062" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-278865 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-278865 create -f testdata/busybox.yaml: exit status 1 (45.357375ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-278865" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-278865 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 6 (213.752473ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:30:51.261405   67619 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-278865" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 6 (210.237575ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:30:51.471723   67649 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-278865" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.47s)
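The DeployApp failure is secondary to the earlier start failure: kubectl aborts because the profile's context was never written to (or was removed from) the kubeconfig, as the `does not appear in .../kubeconfig` status errors show. A minimal client-go sketch of that same check, assuming the default kubeconfig loading rules and the context name taken from the log (illustrative only, not part of the test suite):

	// ctxcheck.go — checks whether a named context exists in the kubeconfig
	// that kubectl would load (KUBECONFIG or ~/.kube/config).
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			panic(err)
		}
		name := "old-k8s-version-278865" // context name from the failing test
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Printf("context %q does not exist in %v\n", name, rules.GetLoadingPrecedence())
		}
	}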

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (114.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-278865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-278865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m54.544806877s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-278865 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-278865 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-278865 describe deploy/metrics-server -n kube-system: exit status 1 (42.904192ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-278865" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-278865 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 6 (218.298792ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:32:46.277009   68567 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-278865" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (114.81s)
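
Note: the post-mortem above shows the profile's entry missing from the kubeconfig ("old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig) together with minikube's own hint to run `minikube update-context`. A minimal sketch of that recovery, assuming the profile still exists on disk; the binary path is the one used throughout this run:

    out/minikube-linux-amd64 update-context -p old-k8s-version-278865                       # repoint the kubeconfig entry at the VM's current endpoint
    kubectl --context old-k8s-version-278865 -n kube-system describe deploy/metrics-server  # re-run the check that failed above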

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042: exit status 3 (3.167682617s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:31:19.728755   67823 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.14:22: connect: no route to host
	E0815 18:31:19.728777   67823 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.14:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-599042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-599042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152151652s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.14:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-599042 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042: exit status 3 (3.063718584s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:31:28.944843   67890 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.14:22: connect: no route to host
	E0815 18:31:28.944865   67890 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.14:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-599042" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
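
Note: the test expects the host to report "Stopped" after the stop step, but status comes back "Error" because SSH to 192.168.72.14:22 is unreachable ("no route to host"). A hedged way to tell a VM that is truly shut off from one that is running but unreachable over SSH, assuming shell access to the Jenkins host and libvirt client tools (the kvm2 driver names the libvirt domain after the profile):

    sudo virsh list --all                                                    # the no-preload-599042 domain should show "shut off" after a clean stop
    out/minikube-linux-amd64 stop -p no-preload-599042                       # if it is still running, stop it again before re-checking
    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042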

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028: exit status 3 (3.16764806s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:31:55.568835   68086 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.234:22: connect: no route to host
	E0815 18:31:55.568856   68086 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.234:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-555028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-555028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153700006s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.234:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-555028 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028: exit status 3 (3.062201215s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:32:04.784873   68218 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.234:22: connect: no route to host
	E0815 18:32:04.784895   68218 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.234:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-555028" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062: exit status 3 (3.168057617s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:32:16.560810   68317 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host
	E0815 18:32:16.560829   68317 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-423062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-423062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15275944s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-423062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062: exit status 3 (3.062842303s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 18:32:25.776814   68398 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host
	E0815 18:32:25.776834   68398 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-423062" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
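
Note: as with the two profiles above, the dashboard enable aborts in its pre-flight paused check because the node's SSH port (192.168.61.7:22) is unreachable. A small sketch of confirming reachability before retrying, assuming netcat is available on the host; the retry command is the same one the test ran:

    nc -vz 192.168.61.7 22   # should report the port open once the VM is reachable again
    out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-423062 --images=MetricsScraper=registry.k8s.io/echoserver:1.4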

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (740.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-278865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0815 18:34:52.218134   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:37:47.733856   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:39:10.804666   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:39:52.218550   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-278865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m16.869230981s)

                                                
                                                
-- stdout --
	* [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-278865" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 18:32:52.788403   68713 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:32:52.788704   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788715   68713 out.go:358] Setting ErrFile to fd 2...
	I0815 18:32:52.788719   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788916   68713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:32:52.789431   68713 out.go:352] Setting JSON to false
	I0815 18:32:52.790297   68713 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8119,"bootTime":1723738654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:32:52.790355   68713 start.go:139] virtualization: kvm guest
	I0815 18:32:52.792478   68713 out.go:177] * [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:32:52.793818   68713 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:32:52.793864   68713 notify.go:220] Checking for updates...
	I0815 18:32:52.796618   68713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:32:52.797914   68713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:32:52.799054   68713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:32:52.800337   68713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:32:52.801448   68713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:32:52.803087   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:32:52.803465   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.803521   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.819013   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0815 18:32:52.819447   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.819966   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.819985   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.820284   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.820482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.822582   68713 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 18:32:52.824024   68713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:32:52.824380   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.824425   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.839486   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0815 18:32:52.839905   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.840345   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.840367   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.840730   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.840904   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.876811   68713 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:32:52.878075   68713 start.go:297] selected driver: kvm2
	I0815 18:32:52.878098   68713 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.878240   68713 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:32:52.878920   68713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.879001   68713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:32:52.894158   68713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:32:52.894895   68713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:32:52.894953   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:32:52.894969   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:32:52.895020   68713 start.go:340] cluster config:
	{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.895203   68713 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.897304   68713 out.go:177] * Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	I0815 18:32:52.898737   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:32:52.898785   68713 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:32:52.898795   68713 cache.go:56] Caching tarball of preloaded images
	I0815 18:32:52.898861   68713 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:32:52.898871   68713 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:32:52.898962   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:32:52.899159   68713 start.go:360] acquireMachinesLock for old-k8s-version-278865: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:36:44.301710   68713 start.go:364] duration metric: took 3m51.402501772s to acquireMachinesLock for "old-k8s-version-278865"
	I0815 18:36:44.301771   68713 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:44.301792   68713 fix.go:54] fixHost starting: 
	I0815 18:36:44.302227   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:44.302265   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:44.319819   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0815 18:36:44.320335   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:44.320975   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:36:44.321003   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:44.321380   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:44.321572   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:36:44.321720   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:36:44.323551   68713 fix.go:112] recreateIfNeeded on old-k8s-version-278865: state=Stopped err=<nil>
	I0815 18:36:44.323586   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	W0815 18:36:44.323748   68713 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:44.325761   68713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-278865" ...
	I0815 18:36:44.327259   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .Start
	I0815 18:36:44.327431   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring networks are active...
	I0815 18:36:44.328116   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network default is active
	I0815 18:36:44.328601   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network mk-old-k8s-version-278865 is active
	I0815 18:36:44.329081   68713 main.go:141] libmachine: (old-k8s-version-278865) Getting domain xml...
	I0815 18:36:44.331888   68713 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:36:45.633882   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting to get IP...
	I0815 18:36:45.634842   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.635216   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.635286   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.635206   69670 retry.go:31] will retry after 300.377534ms: waiting for machine to come up
	I0815 18:36:45.937793   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.938290   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.938312   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.938236   69670 retry.go:31] will retry after 282.311084ms: waiting for machine to come up
	I0815 18:36:46.222856   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.223327   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.223350   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.223283   69670 retry.go:31] will retry after 354.299649ms: waiting for machine to come up
	I0815 18:36:46.578770   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.579337   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.579360   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.579241   69670 retry.go:31] will retry after 382.947645ms: waiting for machine to come up
	I0815 18:36:46.964003   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.964911   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.964943   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.964824   69670 retry.go:31] will retry after 710.757442ms: waiting for machine to come up
	I0815 18:36:47.676738   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:47.677422   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:47.677450   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:47.677360   69670 retry.go:31] will retry after 588.944709ms: waiting for machine to come up
	I0815 18:36:48.268221   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:48.268790   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:48.268814   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:48.268736   69670 retry.go:31] will retry after 781.489196ms: waiting for machine to come up
	I0815 18:36:49.051824   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:49.052246   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:49.052277   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:49.052182   69670 retry.go:31] will retry after 1.393037007s: waiting for machine to come up
	I0815 18:36:50.446428   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:50.446860   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:50.446892   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:50.446800   69670 retry.go:31] will retry after 1.826779004s: waiting for machine to come up
	I0815 18:36:52.275716   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:52.276208   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:52.276231   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:52.276167   69670 retry.go:31] will retry after 1.746726312s: waiting for machine to come up
	I0815 18:36:54.025067   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:54.025508   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:54.025535   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:54.025462   69670 retry.go:31] will retry after 2.693215306s: waiting for machine to come up
	I0815 18:36:56.721740   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:56.722139   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:56.722178   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:56.722070   69670 retry.go:31] will retry after 3.370623363s: waiting for machine to come up
	I0815 18:37:00.093896   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:00.094391   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:37:00.094453   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:37:00.094333   69670 retry.go:31] will retry after 2.855023319s: waiting for machine to come up
	I0815 18:37:02.950449   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950903   68713 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:37:02.950931   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950941   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:37:02.951319   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.951356   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | skip adding static IP to network mk-old-k8s-version-278865 - found existing host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"}
	I0815 18:37:02.951376   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:37:02.951393   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:37:02.951424   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:37:02.953498   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.953804   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953927   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:37:02.953957   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:37:02.953989   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:02.954001   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:37:02.954009   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:37:03.076431   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:03.076748   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:37:03.077325   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.079733   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080100   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.080132   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080332   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:37:03.080537   68713 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:03.080554   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:03.080717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.082778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083140   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.083168   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083331   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.083482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083612   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083730   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.083881   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.084067   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.084078   68713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:03.188779   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:03.188813   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189045   68713 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:37:03.189069   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189284   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.191858   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192171   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.192192   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192328   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.192533   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192676   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192822   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.193015   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.193180   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.193192   68713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:37:03.313099   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:37:03.313129   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.315840   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316196   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.316226   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316378   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.316608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316760   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316885   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.317001   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.317184   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.317207   68713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:03.429897   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:03.429934   68713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:03.429962   68713 buildroot.go:174] setting up certificates
	I0815 18:37:03.429972   68713 provision.go:84] configureAuth start
	I0815 18:37:03.429983   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.430274   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.432724   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433053   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.433083   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433212   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.435181   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435514   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.435543   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435657   68713 provision.go:143] copyHostCerts
	I0815 18:37:03.435715   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:03.435736   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:03.435804   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:03.435919   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:03.435929   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:03.435959   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:03.436045   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:03.436055   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:03.436082   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:03.436170   68713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
	I0815 18:37:03.604924   68713 provision.go:177] copyRemoteCerts
	I0815 18:37:03.604979   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:03.605003   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.607328   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607616   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.607634   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607821   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.608016   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.608171   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.608429   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:03.690560   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:03.714632   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:37:03.737805   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:03.762338   68713 provision.go:87] duration metric: took 332.353741ms to configureAuth
	I0815 18:37:03.762371   68713 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:03.762543   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:37:03.762608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.765626   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.765988   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.766018   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.766211   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.766380   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766574   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766712   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.766897   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.767053   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.767069   68713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:04.050635   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:04.050663   68713 machine.go:96] duration metric: took 970.113556ms to provisionDockerMachine
	I0815 18:37:04.050674   68713 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:37:04.050685   68713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:04.050717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.051048   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:04.051081   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.053709   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054095   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.054124   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054432   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.054622   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.054774   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.054914   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.139381   68713 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:04.145097   68713 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:04.145124   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:04.145201   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:04.145298   68713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:04.145421   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:04.156166   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:04.181562   68713 start.go:296] duration metric: took 130.872499ms for postStartSetup
	I0815 18:37:04.181605   68713 fix.go:56] duration metric: took 19.879821037s for fixHost
	I0815 18:37:04.181629   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.184268   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184652   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.184682   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184917   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.185151   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185345   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185502   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.185677   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:04.185925   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:04.185938   68713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:04.297391   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747024.271483326
	
	I0815 18:37:04.297413   68713 fix.go:216] guest clock: 1723747024.271483326
	I0815 18:37:04.297423   68713 fix.go:229] Guest: 2024-08-15 18:37:04.271483326 +0000 UTC Remote: 2024-08-15 18:37:04.181610291 +0000 UTC m=+251.426055371 (delta=89.873035ms)
	I0815 18:37:04.297448   68713 fix.go:200] guest clock delta is within tolerance: 89.873035ms
	I0815 18:37:04.297455   68713 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 19.99571173s
	I0815 18:37:04.297504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.297818   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:04.300970   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301425   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.301455   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301609   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302194   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302404   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302495   68713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:04.302545   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.302679   68713 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:04.302705   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.305673   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.305903   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306066   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306092   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306273   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306301   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306337   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306537   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306657   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306664   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306827   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306834   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.307009   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.409319   68713 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:04.415576   68713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:04.565772   68713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:04.571909   68713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:04.571996   68713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:04.588400   68713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:04.588427   68713 start.go:495] detecting cgroup driver to use...
	I0815 18:37:04.588528   68713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:04.604253   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:04.619003   68713 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:04.619051   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:04.632530   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:04.646080   68713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:04.763855   68713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:04.922470   68713 docker.go:233] disabling docker service ...
	I0815 18:37:04.922566   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:04.937301   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:04.950721   68713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:05.079767   68713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:05.210207   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:05.225569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:05.247998   68713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:37:05.248070   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.262851   68713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:05.262924   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.274489   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.285901   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.298749   68713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:05.310052   68713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:05.320992   68713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:05.321073   68713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:05.340323   68713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
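(The three commands above probe the bridge-netfilter sysctl, load br_netfilter when the key is absent, and enable IPv4 forwarding. A hedged Go sketch of that same fallback, requiring root, with paths as in the log:)

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge-netfilter sysctl file is missing, the br_netfilter module
		// is not loaded yet, mirroring the fallback shown in the log above.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
		// Equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			panic(err)
		}
	}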
	I0815 18:37:05.354069   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:05.483573   68713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:05.647020   68713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:05.647094   68713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:05.653850   68713 start.go:563] Will wait 60s for crictl version
	I0815 18:37:05.653924   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:05.658476   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:05.697818   68713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:05.697907   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.724931   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.755831   68713 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:37:05.756950   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:05.759791   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760188   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:05.760220   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760468   68713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:05.764753   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:05.777462   68713 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:05.777614   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:37:05.777679   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:05.848895   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:05.848967   68713 ssh_runner.go:195] Run: which lz4
	I0815 18:37:05.853103   68713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:37:05.858012   68713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:37:05.858046   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:37:07.520567   68713 crio.go:462] duration metric: took 1.667489785s to copy over tarball
	I0815 18:37:07.520642   68713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:37:10.534169   68713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013498464s)
	I0815 18:37:10.534194   68713 crio.go:469] duration metric: took 3.013602868s to extract the tarball
	I0815 18:37:10.534201   68713 ssh_runner.go:146] rm: /preloaded.tar.lz4
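(Because no preloaded images were found in the container runtime, the ~473 MB preload tarball was copied to the guest, unpacked into /var, and then removed, as the lines above show. A small Go sketch of that extract-and-clean-up step, reusing the same tar invocation as the log and illustrative only:)

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Mirrors the extraction command from the log; lz4 and sudo must be present.
		cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
		// The tarball is deleted afterwards to reclaim disk space.
		if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
			panic(err)
		}
	}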
	I0815 18:37:10.578998   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:10.619043   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:10.619146   68713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:10.619246   68713 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.619247   68713 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.619278   68713 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:37:10.619275   68713 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.619291   68713 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.619304   68713 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.619322   68713 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.619405   68713 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621367   68713 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.621384   68713 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:37:10.621468   68713 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.621500   68713 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.621596   68713 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.621646   68713 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621706   68713 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.621897   68713 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.798617   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.828530   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:37:10.859528   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.918714   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.977028   68713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:37:10.977073   68713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.977119   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:10.980573   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.985503   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.990642   68713 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:37:10.990684   68713 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:37:10.990733   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.000388   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.007526   68713 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:37:11.007589   68713 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.007642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.008543   68713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:37:11.008581   68713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.008621   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.008642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077224   68713 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:37:11.077269   68713 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077228   68713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:37:11.077347   68713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.077371   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111299   68713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:37:11.111376   68713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.111387   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.111421   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111471   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.156942   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.156944   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.156997   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.263355   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.263448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.263455   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.263544   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.291407   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.312626   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.334606   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.427937   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.433739   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:11.435371   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.439448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.439541   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:37:11.450901   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.477906   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.520009   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:37:11.572349   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:37:11.686243   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:37:11.686295   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:37:11.686325   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:37:11.686378   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:37:11.686420   68713 cache_images.go:92] duration metric: took 1.067250234s to LoadCachedImages
	W0815 18:37:11.686494   68713 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0815 18:37:11.686508   68713 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:37:11.686620   68713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
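(The kubelet unit fragment above is rendered from the node's name, IP, Kubernetes version and runtime socket before being copied to the guest, as the scp lines further below show. A hedged sketch of that templating with Go's text/template; the struct and field names here are hypothetical, not minikube's actual types:)

	package main

	import (
		"os"
		"text/template"
	)

	// params is a hypothetical stand-in for the values minikube substitutes
	// into the kubelet drop-in.
	type params struct {
		Hostname, NodeIP, Kubelet, RuntimeSock string
	}

	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart={{.Kubelet}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint={{.RuntimeSock}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		p := params{
			Hostname:    "old-k8s-version-278865",
			NodeIP:      "192.168.39.89",
			Kubelet:     "/var/lib/minikube/binaries/v1.20.0/kubelet",
			RuntimeSock: "unix:///var/run/crio/crio.sock",
		}
		t := template.Must(template.New("dropin").Parse(dropIn))
		// Render to stdout; the real flow copies the result to
		// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp line below).
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}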
	I0815 18:37:11.686693   68713 ssh_runner.go:195] Run: crio config
	I0815 18:37:11.736781   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:37:11.736808   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:11.736824   68713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:11.736851   68713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:37:11.737039   68713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:11.737120   68713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:37:11.747511   68713 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:11.747585   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:11.757850   68713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:37:11.775982   68713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:11.792938   68713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:37:11.811576   68713 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:11.815708   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:11.829992   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:11.983884   68713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:12.002603   68713 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:37:12.002632   68713 certs.go:194] generating shared ca certs ...
	I0815 18:37:12.002682   68713 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.002867   68713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:12.002926   68713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:12.002942   68713 certs.go:256] generating profile certs ...
	I0815 18:37:12.025160   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:37:12.025296   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:37:12.025351   68713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:37:12.025516   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:12.025578   68713 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:12.025591   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:12.025627   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:12.025661   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:12.025691   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:12.025746   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:12.026614   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:12.066771   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:12.109649   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:12.176744   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:12.207990   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:37:12.244999   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:37:12.282338   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:12.308761   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:37:12.332316   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:12.355977   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:12.379169   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:12.405472   68713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:12.424110   68713 ssh_runner.go:195] Run: openssl version
	I0815 18:37:12.430231   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:12.441531   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.445971   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.446061   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.452134   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:12.466809   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:12.478211   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482659   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482708   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.490225   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:12.504908   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:12.516825   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521854   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521911   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.527884   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:12.539398   68713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:12.544010   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:12.549918   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:12.555714   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:12.561895   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:12.567736   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:12.573664   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
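(The openssl invocations above check that each control-plane certificate stays valid for at least another 86400 seconds, i.e. one day. An equivalent check in Go, as an illustrative sketch; the path is one of the files from the log:)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Same idea as `openssl x509 -noout -checkend 86400 -in <cert>`: is the
		// certificate still valid 24 hours from now?
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h; it would be regenerated")
		} else {
			fmt.Println("certificate valid for more than 24h")
		}
	}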
	I0815 18:37:12.579510   68713 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:12.579627   68713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:12.579688   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.621503   68713 cri.go:89] found id: ""
	I0815 18:37:12.621576   68713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:12.632722   68713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:12.632746   68713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:12.632796   68713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:12.643192   68713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:12.644607   68713 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:12.645629   68713 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-278865" cluster setting kubeconfig missing "old-k8s-version-278865" context setting]
	I0815 18:37:12.647073   68713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.653052   68713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:12.665777   68713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.89
	I0815 18:37:12.665808   68713 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:12.665821   68713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:12.665872   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.713574   68713 cri.go:89] found id: ""
	I0815 18:37:12.713641   68713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:12.731459   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:12.741769   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:12.741789   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:12.741833   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:12.750990   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:12.751049   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:12.761621   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:12.771204   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:12.771261   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:12.782012   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:12.791928   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:12.791994   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:12.801858   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:12.811023   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:12.811083   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:12.822189   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:12.834293   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:12.974325   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.452192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.690442   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.798270   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.900783   68713 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:13.900877   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same probe, sudo pgrep -xnf kube-apiserver.*minikube.*, was re-run every ~500ms from 18:37:14.401 through 18:38:12.901 (118 attempts) without a kube-apiserver process appearing; the repeated lines are elided ...]
	I0815 18:38:13.401532   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
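api_server.go:52 above starts a wait for the apiserver process, and the repeated pgrep probes at roughly half-second intervals are that wait; it never succeeds, so the flow falls through to the diagnostic gathering below. A hedged sketch of such a poll-until-deadline loop (interval and probe command are taken from the log, while the one-minute budget and the helper names are assumptions of the sketch):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // apiServerRunning reports whether a kube-apiserver process for this profile
    // can be found on the node, the same probe the log shows via pgrep.
    func apiServerRunning() bool {
    	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    // waitForAPIServerProcess polls until the probe succeeds or the deadline passes.
    func waitForAPIServerProcess(interval, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if apiServerRunning() {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	// ~500ms between probes, as in the log; the 1-minute budget is an assumption.
    	if err := waitForAPIServerProcess(500*time.Millisecond, time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }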
	I0815 18:38:13.901198   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:13.901295   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:13.938743   68713 cri.go:89] found id: ""
	I0815 18:38:13.938770   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.938778   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:13.938786   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:13.938843   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:13.971997   68713 cri.go:89] found id: ""
	I0815 18:38:13.972029   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.972041   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:13.972048   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:13.972111   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:14.006793   68713 cri.go:89] found id: ""
	I0815 18:38:14.006825   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.006836   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:14.006844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:14.006903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:14.041546   68713 cri.go:89] found id: ""
	I0815 18:38:14.041575   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.041587   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:14.041595   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:14.041680   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:14.077614   68713 cri.go:89] found id: ""
	I0815 18:38:14.077639   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.077648   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:14.077653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:14.077704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:14.113683   68713 cri.go:89] found id: ""
	I0815 18:38:14.113711   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.113721   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:14.113730   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:14.113790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:14.149581   68713 cri.go:89] found id: ""
	I0815 18:38:14.149608   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.149616   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:14.149622   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:14.149678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:14.191576   68713 cri.go:89] found id: ""
	I0815 18:38:14.191606   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.191614   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
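Each crictl query in the scan above prints one container ID per line when anything matches; with nothing running, the output is empty, which is why every component logs found id: "" followed by 0 containers: []. A small illustrative parse of that output (not minikube's actual parser) shows how the empty string collapses to an empty list:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseCrictlIDs turns the output of "crictl ps -a --quiet --name=<X>"
    // (one container ID per line) into a slice of non-empty IDs. Empty output
    // still splits into a single "" element, matching the found id: "" line in
    // the log, before it is filtered away to an empty list.
    func parseCrictlIDs(output string) []string {
    	var ids []string
    	for _, line := range strings.Split(strings.TrimSpace(output), "\n") {
    		if line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids
    }

    func main() {
    	fmt.Printf("%d containers: %v\n", len(parseCrictlIDs("")), parseCrictlIDs(""))                   // 0 containers: []
    	fmt.Printf("%d containers: %v\n", len(parseCrictlIDs("abc123\n")), parseCrictlIDs("abc123\n"))   // 1 containers: [abc123]
    }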
	I0815 18:38:14.191622   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:14.191635   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:14.243253   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:14.243287   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:14.256818   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:14.256845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:14.382914   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.382933   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:14.382948   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:14.461826   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:14.461859   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
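Every "describe nodes" attempt in this diagnostic cycle fails with connection refused on localhost:8443 for the same reason the pgrep loop never succeeded: no kube-apiserver is running, so nothing is listening on the port that /var/lib/minikube/kubeconfig points kubectl at. A quick reachability probe, purely illustrative, reproduces the symptom:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeAPIServer attempts a plain TCP connection to the apiserver endpoint.
    // With no kube-apiserver process running, the dial fails with
    // "connection refused", the same symptom each describe-nodes call reports.
    func probeAPIServer(addr string) error {
    	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	if err := probeAPIServer("localhost:8443"); err != nil {
    		fmt.Println("apiserver not reachable:", err)
    	} else {
    		fmt.Println("apiserver port is accepting connections")
    	}
    }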
	I0815 18:38:17.005615   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:17.020977   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:17.021042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:17.070191   68713 cri.go:89] found id: ""
	I0815 18:38:17.070220   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.070232   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:17.070239   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:17.070301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:17.118582   68713 cri.go:89] found id: ""
	I0815 18:38:17.118612   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.118624   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:17.118631   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:17.118693   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:17.165380   68713 cri.go:89] found id: ""
	I0815 18:38:17.165404   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.165413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:17.165421   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:17.165483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:17.204630   68713 cri.go:89] found id: ""
	I0815 18:38:17.204660   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.204670   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:17.204678   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:17.204740   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:17.239182   68713 cri.go:89] found id: ""
	I0815 18:38:17.239210   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.239219   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:17.239226   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:17.239285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:17.276329   68713 cri.go:89] found id: ""
	I0815 18:38:17.276356   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.276367   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:17.276375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:17.276472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:17.312387   68713 cri.go:89] found id: ""
	I0815 18:38:17.312418   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.312427   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:17.312433   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:17.312485   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:17.348277   68713 cri.go:89] found id: ""
	I0815 18:38:17.348300   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.348308   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:17.348315   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:17.348334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:17.424886   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:17.424924   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.465491   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:17.465518   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:17.517687   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:17.517719   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:17.531928   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:17.531959   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:17.606987   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:20.107740   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:20.123194   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:20.123255   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:20.163586   68713 cri.go:89] found id: ""
	I0815 18:38:20.163608   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.163619   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:20.163627   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:20.163676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:20.200171   68713 cri.go:89] found id: ""
	I0815 18:38:20.200196   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.200204   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:20.200210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:20.200270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:20.234739   68713 cri.go:89] found id: ""
	I0815 18:38:20.234770   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.234781   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:20.234788   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:20.234849   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:20.270182   68713 cri.go:89] found id: ""
	I0815 18:38:20.270206   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.270215   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:20.270220   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:20.270281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:20.303643   68713 cri.go:89] found id: ""
	I0815 18:38:20.303672   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.303682   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:20.303690   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:20.303748   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:20.339399   68713 cri.go:89] found id: ""
	I0815 18:38:20.339431   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.339441   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:20.339449   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:20.339511   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:20.377220   68713 cri.go:89] found id: ""
	I0815 18:38:20.377245   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.377252   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:20.377258   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:20.377310   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:20.411202   68713 cri.go:89] found id: ""
	I0815 18:38:20.411238   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.411249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:20.411268   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:20.411282   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:20.462846   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:20.462879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:20.476569   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:20.476597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:20.554243   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:20.554269   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:20.554285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:20.637450   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:20.637493   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:23.182633   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:23.196953   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:23.197026   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:23.232011   68713 cri.go:89] found id: ""
	I0815 18:38:23.232039   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.232051   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:23.232064   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:23.232114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:23.266963   68713 cri.go:89] found id: ""
	I0815 18:38:23.266992   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.267000   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:23.267006   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:23.267069   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:23.306473   68713 cri.go:89] found id: ""
	I0815 18:38:23.306496   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.306504   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:23.306510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:23.306574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:23.343542   68713 cri.go:89] found id: ""
	I0815 18:38:23.343574   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.343585   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:23.343592   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:23.343652   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:23.382468   68713 cri.go:89] found id: ""
	I0815 18:38:23.382527   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.382539   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:23.382547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:23.382612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:23.418857   68713 cri.go:89] found id: ""
	I0815 18:38:23.418882   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.418891   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:23.418897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:23.418956   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:23.460971   68713 cri.go:89] found id: ""
	I0815 18:38:23.461004   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.461016   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:23.461023   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:23.461100   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:23.494139   68713 cri.go:89] found id: ""
	I0815 18:38:23.494172   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.494183   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:23.494194   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:23.494208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:23.547874   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:23.547908   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:23.562251   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:23.562278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:23.636503   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:23.636528   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:23.636545   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:23.716020   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:23.716051   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.255081   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:26.270118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:26.270184   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:26.308586   68713 cri.go:89] found id: ""
	I0815 18:38:26.308612   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.308623   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:26.308630   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:26.308688   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:26.344364   68713 cri.go:89] found id: ""
	I0815 18:38:26.344394   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.344410   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:26.344418   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:26.344533   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:26.381621   68713 cri.go:89] found id: ""
	I0815 18:38:26.381642   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.381650   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:26.381655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:26.381699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:26.416091   68713 cri.go:89] found id: ""
	I0815 18:38:26.416118   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.416128   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:26.416136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:26.416195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:26.456038   68713 cri.go:89] found id: ""
	I0815 18:38:26.456068   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.456080   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:26.456088   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:26.456151   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:26.490728   68713 cri.go:89] found id: ""
	I0815 18:38:26.490758   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.490769   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:26.490776   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:26.490837   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:26.529388   68713 cri.go:89] found id: ""
	I0815 18:38:26.529422   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.529434   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:26.529440   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:26.529489   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:26.567452   68713 cri.go:89] found id: ""
	I0815 18:38:26.567475   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.567484   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:26.567491   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:26.567503   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:26.641841   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:26.641863   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:26.641879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:26.719403   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:26.719438   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.760460   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:26.760507   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:26.814450   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:26.814480   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:29.329451   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:29.344634   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:29.344706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:29.379278   68713 cri.go:89] found id: ""
	I0815 18:38:29.379308   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.379319   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:29.379326   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:29.379385   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:29.411854   68713 cri.go:89] found id: ""
	I0815 18:38:29.411881   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.411891   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:29.411898   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:29.411965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:29.443975   68713 cri.go:89] found id: ""
	I0815 18:38:29.444004   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.444014   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:29.444022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:29.444081   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:29.477919   68713 cri.go:89] found id: ""
	I0815 18:38:29.477944   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.477954   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:29.477962   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:29.478020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:29.518944   68713 cri.go:89] found id: ""
	I0815 18:38:29.518973   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.518985   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:29.518992   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:29.519052   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:29.553876   68713 cri.go:89] found id: ""
	I0815 18:38:29.553903   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.553913   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:29.553921   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:29.553974   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:29.590768   68713 cri.go:89] found id: ""
	I0815 18:38:29.590804   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.590815   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:29.590823   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:29.590879   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:29.625553   68713 cri.go:89] found id: ""
	I0815 18:38:29.625578   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.625586   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:29.625595   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:29.625606   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.668447   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:29.668478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:29.721002   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:29.721035   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:29.734955   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:29.734983   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:29.808703   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:29.808726   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:29.808742   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.397781   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:32.413876   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:32.413937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:32.453689   68713 cri.go:89] found id: ""
	I0815 18:38:32.453720   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.453777   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:32.453791   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:32.453839   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:32.490529   68713 cri.go:89] found id: ""
	I0815 18:38:32.490559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.490567   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:32.490573   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:32.490622   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:32.527680   68713 cri.go:89] found id: ""
	I0815 18:38:32.527710   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.527720   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:32.527727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:32.527790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:32.564619   68713 cri.go:89] found id: ""
	I0815 18:38:32.564656   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.564667   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:32.564677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:32.564745   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:32.600530   68713 cri.go:89] found id: ""
	I0815 18:38:32.600559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.600570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:32.600577   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:32.600639   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:32.636779   68713 cri.go:89] found id: ""
	I0815 18:38:32.636813   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.636821   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:32.636828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:32.636897   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:32.673743   68713 cri.go:89] found id: ""
	I0815 18:38:32.673774   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.673786   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:32.673794   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:32.673853   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:32.709678   68713 cri.go:89] found id: ""
	I0815 18:38:32.709708   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.709719   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:32.709730   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:32.709744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.785961   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:32.785998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:32.828205   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:32.828237   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:32.894624   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:32.894666   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:32.910742   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:32.910769   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:32.980853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.481438   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:35.495373   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:35.495444   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:35.529184   68713 cri.go:89] found id: ""
	I0815 18:38:35.529212   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.529221   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:35.529226   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:35.529275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:35.565188   68713 cri.go:89] found id: ""
	I0815 18:38:35.565214   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.565221   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:35.565227   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:35.565281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:35.600386   68713 cri.go:89] found id: ""
	I0815 18:38:35.600416   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.600428   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:35.600435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:35.600519   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:35.634255   68713 cri.go:89] found id: ""
	I0815 18:38:35.634278   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.634287   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:35.634293   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:35.634339   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:35.670236   68713 cri.go:89] found id: ""
	I0815 18:38:35.670260   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.670268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:35.670273   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:35.670354   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:35.707691   68713 cri.go:89] found id: ""
	I0815 18:38:35.707714   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.707722   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:35.707727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:35.707782   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:35.745791   68713 cri.go:89] found id: ""
	I0815 18:38:35.745820   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.745832   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:35.745844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:35.745916   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:35.784167   68713 cri.go:89] found id: ""
	I0815 18:38:35.784195   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.784205   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:35.784217   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:35.784234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:35.864681   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:35.864711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:35.906831   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:35.906858   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:35.960328   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:35.960366   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:35.974401   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:35.974428   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:36.044789   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:38.545951   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:38.561473   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:38.561540   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:38.597621   68713 cri.go:89] found id: ""
	I0815 18:38:38.597658   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.597668   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:38.597679   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:38.597756   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:38.632081   68713 cri.go:89] found id: ""
	I0815 18:38:38.632115   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.632127   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:38.632135   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:38.632203   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:38.669917   68713 cri.go:89] found id: ""
	I0815 18:38:38.669944   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.669952   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:38.669958   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:38.670015   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:38.707552   68713 cri.go:89] found id: ""
	I0815 18:38:38.707574   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.707582   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:38.707588   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:38.707642   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:38.746054   68713 cri.go:89] found id: ""
	I0815 18:38:38.746082   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.746093   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:38.746101   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:38.746166   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:38.783901   68713 cri.go:89] found id: ""
	I0815 18:38:38.783933   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.783945   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:38.783952   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:38.784018   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:38.825411   68713 cri.go:89] found id: ""
	I0815 18:38:38.825441   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.825452   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:38.825459   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:38.825520   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:38.863174   68713 cri.go:89] found id: ""
	I0815 18:38:38.863219   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.863231   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:38.863241   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:38.863254   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:38.914016   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:38.914045   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:38.927634   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:38.927659   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:38.993380   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:38.993407   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:38.993422   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:39.077075   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:39.077116   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:41.620219   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:41.633572   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:41.633628   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:41.670330   68713 cri.go:89] found id: ""
	I0815 18:38:41.670351   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.670358   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:41.670364   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:41.670418   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:41.706467   68713 cri.go:89] found id: ""
	I0815 18:38:41.706494   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.706502   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:41.706508   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:41.706564   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:41.742915   68713 cri.go:89] found id: ""
	I0815 18:38:41.742958   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.742970   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:41.742978   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:41.743044   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:41.778650   68713 cri.go:89] found id: ""
	I0815 18:38:41.778679   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.778687   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:41.778692   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:41.778739   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:41.813329   68713 cri.go:89] found id: ""
	I0815 18:38:41.813358   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.813369   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:41.813375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:41.813427   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:41.851351   68713 cri.go:89] found id: ""
	I0815 18:38:41.851383   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.851391   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:41.851398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:41.851460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:41.895097   68713 cri.go:89] found id: ""
	I0815 18:38:41.895130   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.895142   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:41.895150   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:41.895209   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:41.931306   68713 cri.go:89] found id: ""
	I0815 18:38:41.931336   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.931353   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:41.931365   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:41.931381   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:41.944796   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:41.944828   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:42.018868   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:42.018891   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:42.018903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:42.104304   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:42.104340   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:42.143625   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:42.143655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
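	The cycle above repeats for the rest of this section: minikube is waiting for kube-apiserver to come back, finds no control-plane containers through crictl, and re-gathers the same kubelet, dmesg, describe-nodes, CRI-O and container-status logs on every pass. A minimal shell sketch of that wait loop, using only the commands that appear in the ssh_runner entries above (the loop structure and the ~3 s interval are assumptions for illustration, not minikube's actual Go code):
	
	    #!/usr/bin/env bash
	    # Poll for a running kube-apiserver; while none is found, re-collect diagnostics.
	    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                  kube-controller-manager kindnet kubernetes-dashboard; do
	        sudo crictl ps -a --quiet --name="$name"      # returns no IDs in this run
	      done
	      sudo journalctl -u kubelet -n 400
	      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	           --kubeconfig=/var/lib/minikube/kubeconfig  # fails: localhost:8443 refused
	      sudo journalctl -u crio -n 400
	      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	      sleep 3
	    done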
	I0815 18:38:44.698568   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:44.712171   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:44.712247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.747043   68713 cri.go:89] found id: ""
	I0815 18:38:44.747071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.747079   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:44.747085   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:44.747143   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:44.782660   68713 cri.go:89] found id: ""
	I0815 18:38:44.782691   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.782703   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:44.782711   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:44.782765   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:44.821111   68713 cri.go:89] found id: ""
	I0815 18:38:44.821138   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.821146   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:44.821152   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:44.821222   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:44.859602   68713 cri.go:89] found id: ""
	I0815 18:38:44.859635   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.859647   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:44.859655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:44.859717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:44.895037   68713 cri.go:89] found id: ""
	I0815 18:38:44.895071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.895083   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:44.895090   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:44.895175   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:44.928729   68713 cri.go:89] found id: ""
	I0815 18:38:44.928759   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.928771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:44.928781   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:44.928844   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:44.963945   68713 cri.go:89] found id: ""
	I0815 18:38:44.963977   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.963987   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:44.963996   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:44.964060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:45.001166   68713 cri.go:89] found id: ""
	I0815 18:38:45.001195   68713 logs.go:276] 0 containers: []
	W0815 18:38:45.001206   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:45.001218   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:45.001234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:45.015181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:45.015209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:45.084297   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:45.084322   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:45.084334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:45.173833   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:45.173866   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:45.211863   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:45.211899   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:47.771009   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:47.784865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:47.784926   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:47.818497   68713 cri.go:89] found id: ""
	I0815 18:38:47.818526   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.818538   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:47.818545   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:47.818608   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:47.857900   68713 cri.go:89] found id: ""
	I0815 18:38:47.857927   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.857935   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:47.857941   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:47.857997   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:47.895778   68713 cri.go:89] found id: ""
	I0815 18:38:47.895809   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.895822   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:47.895829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:47.895887   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:47.937410   68713 cri.go:89] found id: ""
	I0815 18:38:47.937434   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.937442   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:47.937448   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:47.937505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:47.976414   68713 cri.go:89] found id: ""
	I0815 18:38:47.976442   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.976450   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:47.976455   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:47.976525   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:48.014863   68713 cri.go:89] found id: ""
	I0815 18:38:48.014891   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.014899   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:48.014906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:48.014969   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:48.053508   68713 cri.go:89] found id: ""
	I0815 18:38:48.053536   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.053546   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:48.053554   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:48.053624   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:48.088900   68713 cri.go:89] found id: ""
	I0815 18:38:48.088931   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.088943   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:48.088954   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:48.088969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:48.140415   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:48.140447   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:48.155958   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:48.155985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:48.229338   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:48.229368   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:48.229383   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:48.317776   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:48.317814   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:50.860592   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:50.877070   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:50.877154   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:50.937590   68713 cri.go:89] found id: ""
	I0815 18:38:50.937615   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.937622   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:50.937628   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:50.937687   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:50.972573   68713 cri.go:89] found id: ""
	I0815 18:38:50.972603   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.972614   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:50.972622   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:50.972683   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:51.008786   68713 cri.go:89] found id: ""
	I0815 18:38:51.008811   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.008820   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:51.008826   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:51.008875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:51.043076   68713 cri.go:89] found id: ""
	I0815 18:38:51.043105   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.043116   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:51.043123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:51.043186   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:51.078344   68713 cri.go:89] found id: ""
	I0815 18:38:51.078379   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.078391   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:51.078398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:51.078453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:51.114494   68713 cri.go:89] found id: ""
	I0815 18:38:51.114521   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.114532   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:51.114540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:51.114600   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:51.153871   68713 cri.go:89] found id: ""
	I0815 18:38:51.153898   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.153909   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:51.153917   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:51.153984   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:51.187908   68713 cri.go:89] found id: ""
	I0815 18:38:51.187937   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.187948   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:51.187959   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:51.187974   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:51.264172   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:51.264198   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:51.264214   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:51.345238   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:51.345285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:51.385720   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:51.385745   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:51.443313   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:51.443353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:53.959176   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:53.972031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:53.972101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:54.010673   68713 cri.go:89] found id: ""
	I0815 18:38:54.010699   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.010707   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:54.010714   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:54.010775   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:54.045632   68713 cri.go:89] found id: ""
	I0815 18:38:54.045662   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.045672   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:54.045678   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:54.045727   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:54.082111   68713 cri.go:89] found id: ""
	I0815 18:38:54.082134   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.082142   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:54.082148   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:54.082206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:54.118210   68713 cri.go:89] found id: ""
	I0815 18:38:54.118232   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.118239   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:54.118246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:54.118305   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:54.155474   68713 cri.go:89] found id: ""
	I0815 18:38:54.155499   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.155508   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:54.155515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:54.155572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:54.193263   68713 cri.go:89] found id: ""
	I0815 18:38:54.193298   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.193305   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:54.193311   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:54.193365   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:54.233389   68713 cri.go:89] found id: ""
	I0815 18:38:54.233416   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.233428   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:54.233435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:54.233502   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:54.266127   68713 cri.go:89] found id: ""
	I0815 18:38:54.266155   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.266164   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:54.266176   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:54.266199   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:54.318724   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:54.318762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:54.332993   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:54.333022   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:54.405895   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:54.405915   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:54.405926   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.485819   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:54.485875   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.024956   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:57.038182   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:57.038246   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:57.078020   68713 cri.go:89] found id: ""
	I0815 18:38:57.078044   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.078055   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:57.078063   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:57.078127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:57.115077   68713 cri.go:89] found id: ""
	I0815 18:38:57.115101   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.115110   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:57.115118   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:57.115179   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:57.152711   68713 cri.go:89] found id: ""
	I0815 18:38:57.152737   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.152747   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:57.152755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:57.152819   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:57.190000   68713 cri.go:89] found id: ""
	I0815 18:38:57.190034   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.190042   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:57.190048   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:57.190096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:57.224947   68713 cri.go:89] found id: ""
	I0815 18:38:57.224978   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.224990   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:57.224998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:57.225060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:57.262329   68713 cri.go:89] found id: ""
	I0815 18:38:57.262365   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.262375   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:57.262383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:57.262458   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:57.299471   68713 cri.go:89] found id: ""
	I0815 18:38:57.299498   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.299507   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:57.299513   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:57.299572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:57.357163   68713 cri.go:89] found id: ""
	I0815 18:38:57.357202   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.357211   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:57.357220   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:57.357236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.405154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:57.405184   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:57.459245   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:57.459277   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:57.473663   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:57.473699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:57.546670   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:57.546699   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:57.546715   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:00.124455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:00.137985   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:00.138053   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:00.175201   68713 cri.go:89] found id: ""
	I0815 18:39:00.175231   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.175242   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:00.175250   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:00.175328   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:00.209376   68713 cri.go:89] found id: ""
	I0815 18:39:00.209406   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.209418   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:00.209426   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:00.209484   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:00.246860   68713 cri.go:89] found id: ""
	I0815 18:39:00.246889   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.246899   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:00.246906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:00.246965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:00.282787   68713 cri.go:89] found id: ""
	I0815 18:39:00.282814   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.282823   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:00.282829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:00.282875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:00.330227   68713 cri.go:89] found id: ""
	I0815 18:39:00.330256   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.330268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:00.330275   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:00.330338   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:00.363028   68713 cri.go:89] found id: ""
	I0815 18:39:00.363061   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.363072   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:00.363079   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:00.363169   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:00.400484   68713 cri.go:89] found id: ""
	I0815 18:39:00.400522   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.400533   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:00.400540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:00.400597   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:00.436187   68713 cri.go:89] found id: ""
	I0815 18:39:00.436225   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.436238   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:00.436252   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:00.436267   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:00.481960   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:00.481985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:00.535103   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:00.535138   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:00.548541   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:00.548568   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:00.619476   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:00.619505   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:00.619525   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:03.206473   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:03.222893   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:03.222967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:03.272249   68713 cri.go:89] found id: ""
	I0815 18:39:03.272275   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.272283   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:03.272291   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:03.272355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:03.336786   68713 cri.go:89] found id: ""
	I0815 18:39:03.336811   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.336819   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:03.336825   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:03.336884   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:03.375977   68713 cri.go:89] found id: ""
	I0815 18:39:03.376002   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.376011   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:03.376016   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:03.376063   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:03.410304   68713 cri.go:89] found id: ""
	I0815 18:39:03.410326   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.410335   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:03.410340   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:03.410403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:03.446147   68713 cri.go:89] found id: ""
	I0815 18:39:03.446176   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.446188   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:03.446195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:03.446256   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:03.482555   68713 cri.go:89] found id: ""
	I0815 18:39:03.482582   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.482591   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:03.482597   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:03.482654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:03.519477   68713 cri.go:89] found id: ""
	I0815 18:39:03.519503   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.519511   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:03.519517   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:03.519574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:03.556539   68713 cri.go:89] found id: ""
	I0815 18:39:03.556566   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.556577   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:03.556587   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:03.556602   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:03.610553   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:03.610593   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:03.625799   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:03.625827   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:03.697106   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:03.697132   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:03.697149   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:03.779089   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:03.779120   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:06.319280   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:06.333284   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:06.333355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:06.369696   68713 cri.go:89] found id: ""
	I0815 18:39:06.369719   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.369727   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:06.369732   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:06.369780   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:06.405023   68713 cri.go:89] found id: ""
	I0815 18:39:06.405046   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.405053   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:06.405059   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:06.405113   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:06.439948   68713 cri.go:89] found id: ""
	I0815 18:39:06.439974   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.439983   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:06.439989   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:06.440048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:06.475613   68713 cri.go:89] found id: ""
	I0815 18:39:06.475642   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.475654   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:06.475664   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:06.475723   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:06.510698   68713 cri.go:89] found id: ""
	I0815 18:39:06.510721   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.510729   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:06.510735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:06.510783   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:06.545831   68713 cri.go:89] found id: ""
	I0815 18:39:06.545861   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.545873   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:06.545880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:06.545940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:06.579027   68713 cri.go:89] found id: ""
	I0815 18:39:06.579053   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.579064   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:06.579072   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:06.579132   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:06.615308   68713 cri.go:89] found id: ""
	I0815 18:39:06.615339   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.615352   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:06.615371   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:06.615396   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:06.671523   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:06.671555   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:06.685556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:06.685580   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:06.765036   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:06.765059   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:06.765071   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:06.843412   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:06.843457   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:09.390799   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:09.404099   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:09.404160   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:09.439534   68713 cri.go:89] found id: ""
	I0815 18:39:09.439563   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.439582   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:09.439591   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:09.439654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:09.478933   68713 cri.go:89] found id: ""
	I0815 18:39:09.478963   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.478974   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:09.478982   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:09.479042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:09.514396   68713 cri.go:89] found id: ""
	I0815 18:39:09.514425   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.514436   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:09.514444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:09.514510   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:09.547749   68713 cri.go:89] found id: ""
	I0815 18:39:09.547775   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.547785   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:09.547793   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:09.547856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:09.583583   68713 cri.go:89] found id: ""
	I0815 18:39:09.583611   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.583623   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:09.583631   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:09.583695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:09.616530   68713 cri.go:89] found id: ""
	I0815 18:39:09.616560   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.616570   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:09.616576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:09.616641   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:09.655167   68713 cri.go:89] found id: ""
	I0815 18:39:09.655189   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.655199   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:09.655207   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:09.655263   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:09.691368   68713 cri.go:89] found id: ""
	I0815 18:39:09.691391   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.691398   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:09.691411   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:09.691426   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:09.740739   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:09.740770   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:09.755049   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:09.755074   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:09.825053   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:09.825080   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:09.825095   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:09.903036   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:09.903076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:12.441898   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:12.454637   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:12.454712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:12.494604   68713 cri.go:89] found id: ""
	I0815 18:39:12.494632   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.494640   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:12.494646   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:12.494699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:12.531587   68713 cri.go:89] found id: ""
	I0815 18:39:12.531631   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.531642   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:12.531649   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:12.531710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:12.564991   68713 cri.go:89] found id: ""
	I0815 18:39:12.565014   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.565021   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:12.565027   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:12.565096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:12.600667   68713 cri.go:89] found id: ""
	I0815 18:39:12.600698   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.600709   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:12.600715   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:12.600777   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:12.633658   68713 cri.go:89] found id: ""
	I0815 18:39:12.633681   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.633691   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:12.633698   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:12.633759   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:12.673709   68713 cri.go:89] found id: ""
	I0815 18:39:12.673730   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.673737   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:12.673743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:12.673790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:12.707353   68713 cri.go:89] found id: ""
	I0815 18:39:12.707378   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.707385   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:12.707390   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:12.707437   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:12.746926   68713 cri.go:89] found id: ""
	I0815 18:39:12.746949   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.746957   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:12.746965   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:12.746977   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:12.792154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:12.792180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:12.843933   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:12.843968   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:12.859583   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:12.859609   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:12.940856   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:12.940880   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:12.940895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:15.520265   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:15.533677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:15.533754   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:15.572109   68713 cri.go:89] found id: ""
	I0815 18:39:15.572135   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.572145   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:15.572153   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:15.572221   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:15.607442   68713 cri.go:89] found id: ""
	I0815 18:39:15.607472   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.607484   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:15.607492   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:15.607551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:15.642206   68713 cri.go:89] found id: ""
	I0815 18:39:15.642230   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.642238   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:15.642246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:15.642308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:15.677914   68713 cri.go:89] found id: ""
	I0815 18:39:15.677945   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.677956   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:15.677963   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:15.678049   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:15.714466   68713 cri.go:89] found id: ""
	I0815 18:39:15.714496   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.714504   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:15.714510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:15.714563   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:15.750961   68713 cri.go:89] found id: ""
	I0815 18:39:15.750987   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.750995   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:15.751002   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:15.751050   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:15.785399   68713 cri.go:89] found id: ""
	I0815 18:39:15.785434   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.785444   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:15.785450   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:15.785498   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:15.821547   68713 cri.go:89] found id: ""
	I0815 18:39:15.821571   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.821578   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:15.821586   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:15.821597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:15.875299   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:15.875329   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:15.890376   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:15.890408   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:15.957317   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:15.957337   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:15.957352   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:16.033952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:16.033997   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:18.571953   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:18.584652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:18.584721   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:18.617043   68713 cri.go:89] found id: ""
	I0815 18:39:18.617066   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.617073   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:18.617079   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:18.617127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:18.651080   68713 cri.go:89] found id: ""
	I0815 18:39:18.651112   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.651123   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:18.651130   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:18.651187   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:18.686857   68713 cri.go:89] found id: ""
	I0815 18:39:18.686890   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.686901   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:18.686909   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:18.686975   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:18.719397   68713 cri.go:89] found id: ""
	I0815 18:39:18.719434   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.719444   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:18.719452   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:18.719521   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:18.758316   68713 cri.go:89] found id: ""
	I0815 18:39:18.758349   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.758360   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:18.758366   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:18.758435   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:18.791586   68713 cri.go:89] found id: ""
	I0815 18:39:18.791609   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.791617   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:18.791623   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:18.791690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:18.827905   68713 cri.go:89] found id: ""
	I0815 18:39:18.827929   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.827937   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:18.827945   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:18.828004   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:18.869371   68713 cri.go:89] found id: ""
	I0815 18:39:18.869404   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.869412   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:18.869420   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:18.869432   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:18.920124   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:18.920158   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:18.936137   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:18.936168   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:19.006877   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:19.006902   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:19.006913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:19.088909   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:19.088953   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.632734   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:21.647246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:21.647322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:21.685574   68713 cri.go:89] found id: ""
	I0815 18:39:21.685606   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.685614   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:21.685620   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:21.685676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:21.717073   68713 cri.go:89] found id: ""
	I0815 18:39:21.717112   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.717124   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:21.717133   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:21.717205   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:21.752072   68713 cri.go:89] found id: ""
	I0815 18:39:21.752101   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.752112   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:21.752120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:21.752182   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:21.786811   68713 cri.go:89] found id: ""
	I0815 18:39:21.786834   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.786842   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:21.786848   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:21.786893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:21.823694   68713 cri.go:89] found id: ""
	I0815 18:39:21.823719   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.823728   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:21.823733   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:21.823790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:21.859358   68713 cri.go:89] found id: ""
	I0815 18:39:21.859387   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.859398   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:21.859405   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:21.859469   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:21.893723   68713 cri.go:89] found id: ""
	I0815 18:39:21.893751   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.893761   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:21.893769   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:21.893826   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:21.929338   68713 cri.go:89] found id: ""
	I0815 18:39:21.929368   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.929379   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:21.929388   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:21.929414   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:21.979107   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:21.979141   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:21.993968   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:21.994005   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:22.063359   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:22.063384   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:22.063398   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:22.144303   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:22.144337   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:24.688104   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:24.701230   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:24.701298   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:24.735056   68713 cri.go:89] found id: ""
	I0815 18:39:24.735086   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.735097   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:24.735104   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:24.735172   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:24.769704   68713 cri.go:89] found id: ""
	I0815 18:39:24.769732   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.769743   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:24.769751   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:24.769812   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:24.808763   68713 cri.go:89] found id: ""
	I0815 18:39:24.808788   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.808796   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:24.808807   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:24.808856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:24.846997   68713 cri.go:89] found id: ""
	I0815 18:39:24.847028   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.847038   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:24.847045   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:24.847106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:24.881681   68713 cri.go:89] found id: ""
	I0815 18:39:24.881705   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.881713   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:24.881719   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:24.881772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:24.917000   68713 cri.go:89] found id: ""
	I0815 18:39:24.917024   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.917032   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:24.917040   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:24.917088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:24.951133   68713 cri.go:89] found id: ""
	I0815 18:39:24.951156   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.951164   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:24.951170   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:24.951218   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:24.987306   68713 cri.go:89] found id: ""
	I0815 18:39:24.987331   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.987339   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:24.987347   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:24.987360   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:25.039533   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:25.039566   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:25.053011   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:25.053036   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:25.125864   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:25.125884   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:25.125895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:25.209885   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:25.209916   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:27.751681   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:27.765316   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:27.765390   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:27.805820   68713 cri.go:89] found id: ""
	I0815 18:39:27.805858   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.805870   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:27.805878   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:27.805940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:27.846684   68713 cri.go:89] found id: ""
	I0815 18:39:27.846717   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.846727   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:27.846737   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:27.846801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:27.882326   68713 cri.go:89] found id: ""
	I0815 18:39:27.882358   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.882370   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:27.882378   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:27.882448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:27.917340   68713 cri.go:89] found id: ""
	I0815 18:39:27.917416   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.917431   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:27.917442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:27.917505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:27.952674   68713 cri.go:89] found id: ""
	I0815 18:39:27.952700   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.952708   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:27.952714   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:27.952763   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:27.986103   68713 cri.go:89] found id: ""
	I0815 18:39:27.986132   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.986143   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:27.986151   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:27.986212   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:28.023674   68713 cri.go:89] found id: ""
	I0815 18:39:28.023716   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.023735   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:28.023742   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:28.023807   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:28.064902   68713 cri.go:89] found id: ""
	I0815 18:39:28.064929   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.064937   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:28.064945   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:28.064957   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:28.116145   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:28.116180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:28.130435   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:28.130462   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:28.204899   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:28.204920   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:28.204931   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:28.284165   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:28.284202   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
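
The timestamps above show the same diagnostics cycle repeating roughly every three seconds after each `sudo pgrep -xnf kube-apiserver.*minikube.*` check comes back empty. A minimal Go sketch of such a poll-until-deadline loop (an illustrative assumption, not minikube's ssh_runner implementation) could look like:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// apiserverRunning mirrors the logged check: pgrep exits 0 only when a
	// process matching the pattern exists.
	func apiserverRunning() bool {
		return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			if apiserverRunning() {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second) // roughly the retry interval visible in the timestamps above
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
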
	I0815 18:39:30.824135   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:30.837515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:30.837583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:30.874671   68713 cri.go:89] found id: ""
	I0815 18:39:30.874695   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.874705   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:30.874712   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:30.874776   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:30.909990   68713 cri.go:89] found id: ""
	I0815 18:39:30.910027   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.910038   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:30.910045   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:30.910106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:30.946824   68713 cri.go:89] found id: ""
	I0815 18:39:30.946851   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.946859   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:30.946865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:30.946912   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:30.983392   68713 cri.go:89] found id: ""
	I0815 18:39:30.983419   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.983429   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:30.983437   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:30.983505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:31.023471   68713 cri.go:89] found id: ""
	I0815 18:39:31.023500   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.023510   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:31.023518   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:31.023583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:31.063586   68713 cri.go:89] found id: ""
	I0815 18:39:31.063616   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.063627   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:31.063636   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:31.063696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:31.103147   68713 cri.go:89] found id: ""
	I0815 18:39:31.103173   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.103180   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:31.103186   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:31.103237   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:31.144082   68713 cri.go:89] found id: ""
	I0815 18:39:31.144113   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.144124   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:31.144136   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:31.144150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:31.212535   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:31.212563   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:31.212586   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:31.292039   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:31.292076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:31.335023   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:31.335050   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:31.388817   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:31.388853   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:33.925861   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:33.939604   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:33.939668   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:33.974538   68713 cri.go:89] found id: ""
	I0815 18:39:33.974563   68713 logs.go:276] 0 containers: []
	W0815 18:39:33.974571   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:33.974577   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:33.974634   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:34.009017   68713 cri.go:89] found id: ""
	I0815 18:39:34.009048   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.009058   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:34.009064   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:34.009120   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:34.049478   68713 cri.go:89] found id: ""
	I0815 18:39:34.049501   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.049517   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:34.049523   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:34.049576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:34.091011   68713 cri.go:89] found id: ""
	I0815 18:39:34.091040   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.091050   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:34.091056   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:34.091114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:34.126617   68713 cri.go:89] found id: ""
	I0815 18:39:34.126640   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.126650   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:34.126657   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:34.126706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:34.168140   68713 cri.go:89] found id: ""
	I0815 18:39:34.168169   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.168179   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:34.168187   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:34.168279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:34.205052   68713 cri.go:89] found id: ""
	I0815 18:39:34.205081   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.205091   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:34.205098   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:34.205173   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:34.238474   68713 cri.go:89] found id: ""
	I0815 18:39:34.238499   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.238506   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:34.238521   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:34.238540   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.280574   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:34.280601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:34.332662   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:34.332704   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:34.348556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:34.348591   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:34.421428   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:34.421450   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:34.421464   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.004855   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:37.019306   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:37.019378   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:37.057588   68713 cri.go:89] found id: ""
	I0815 18:39:37.057618   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.057626   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:37.057641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:37.057706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:37.095645   68713 cri.go:89] found id: ""
	I0815 18:39:37.095678   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.095687   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:37.095693   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:37.095750   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:37.131669   68713 cri.go:89] found id: ""
	I0815 18:39:37.131696   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.131711   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:37.131717   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:37.131772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:37.185065   68713 cri.go:89] found id: ""
	I0815 18:39:37.185097   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.185108   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:37.185115   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:37.185180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:37.220220   68713 cri.go:89] found id: ""
	I0815 18:39:37.220251   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.220262   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:37.220269   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:37.220322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:37.259816   68713 cri.go:89] found id: ""
	I0815 18:39:37.259849   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.259859   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:37.259868   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:37.259920   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:37.292777   68713 cri.go:89] found id: ""
	I0815 18:39:37.292807   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.292818   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:37.292825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:37.292888   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:37.328673   68713 cri.go:89] found id: ""
	I0815 18:39:37.328703   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.328714   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:37.328725   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:37.328740   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:37.379131   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:37.379172   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:37.392982   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:37.393017   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:37.470727   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:37.470750   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:37.470766   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.552353   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:37.552386   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:40.094008   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:40.107681   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:40.107753   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:40.142229   68713 cri.go:89] found id: ""
	I0815 18:39:40.142254   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.142264   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:40.142271   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:40.142333   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:40.180622   68713 cri.go:89] found id: ""
	I0815 18:39:40.180650   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.180665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:40.180672   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:40.180733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:40.219085   68713 cri.go:89] found id: ""
	I0815 18:39:40.219113   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.219120   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:40.219126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:40.219174   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:40.254807   68713 cri.go:89] found id: ""
	I0815 18:39:40.254838   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.254850   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:40.254858   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:40.254940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:40.290438   68713 cri.go:89] found id: ""
	I0815 18:39:40.290466   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.290478   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:40.290484   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:40.290547   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:40.326320   68713 cri.go:89] found id: ""
	I0815 18:39:40.326356   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.326364   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:40.326370   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:40.326429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:40.361538   68713 cri.go:89] found id: ""
	I0815 18:39:40.361563   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.361570   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:40.361576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:40.361629   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:40.397275   68713 cri.go:89] found id: ""
	I0815 18:39:40.397304   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.397316   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:40.397326   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:40.397342   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:40.466042   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:40.466064   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:40.466078   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:40.544915   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:40.544951   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:40.584992   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:40.585019   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:40.634792   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:40.634837   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
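
Every `kubectl describe nodes` attempt above fails with the same "connection to the server localhost:8443 was refused" error, which is consistent with no kube-apiserver listening on that port rather than with a kubeconfig problem. A hedged, illustrative Go sketch of that reachability check is:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// With no kube-apiserver container running, nothing listens on localhost:8443,
		// so the dial fails with "connection refused" - the same error kubectl
		// reports in every "describe nodes" attempt in the log above.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}
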
	I0815 18:39:43.149819   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:43.164578   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:43.164649   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:43.199576   68713 cri.go:89] found id: ""
	I0815 18:39:43.199621   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.199633   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:43.199641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:43.199702   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:43.233783   68713 cri.go:89] found id: ""
	I0815 18:39:43.233820   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.233833   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:43.233840   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:43.233911   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:43.269406   68713 cri.go:89] found id: ""
	I0815 18:39:43.269437   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.269449   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:43.269457   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:43.269570   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:43.310686   68713 cri.go:89] found id: ""
	I0815 18:39:43.310715   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.310726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:43.310734   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:43.310795   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:43.348662   68713 cri.go:89] found id: ""
	I0815 18:39:43.348689   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.348699   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:43.348706   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:43.348767   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:43.385676   68713 cri.go:89] found id: ""
	I0815 18:39:43.385714   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.385726   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:43.385737   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:43.385802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:43.422605   68713 cri.go:89] found id: ""
	I0815 18:39:43.422634   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.422645   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:43.422653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:43.422712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:43.463208   68713 cri.go:89] found id: ""
	I0815 18:39:43.463238   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.463249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:43.463260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:43.463278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:43.476637   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:43.476664   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:43.552239   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:43.552263   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:43.552278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:43.653055   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:43.653108   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:43.699166   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:43.699192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.251725   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:46.265164   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:46.265240   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:46.305095   68713 cri.go:89] found id: ""
	I0815 18:39:46.305123   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.305133   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:46.305140   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:46.305196   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:46.349744   68713 cri.go:89] found id: ""
	I0815 18:39:46.349773   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.349783   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:46.349790   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:46.349858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:46.385807   68713 cri.go:89] found id: ""
	I0815 18:39:46.385839   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.385847   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:46.385853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:46.385908   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:46.419977   68713 cri.go:89] found id: ""
	I0815 18:39:46.420011   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.420024   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:46.420031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:46.420093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:46.454852   68713 cri.go:89] found id: ""
	I0815 18:39:46.454883   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.454894   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:46.454901   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:46.454962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:46.497157   68713 cri.go:89] found id: ""
	I0815 18:39:46.497192   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.497202   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:46.497210   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:46.497271   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:46.530191   68713 cri.go:89] found id: ""
	I0815 18:39:46.530218   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.530226   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:46.530232   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:46.530282   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:46.566024   68713 cri.go:89] found id: ""
	I0815 18:39:46.566050   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.566063   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:46.566074   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:46.566103   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.621969   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:46.622004   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:46.636576   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:46.636603   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:46.706819   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:46.706842   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:46.706857   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:46.786589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:46.786634   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:49.324853   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:49.343543   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:49.343618   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:49.396260   68713 cri.go:89] found id: ""
	I0815 18:39:49.396292   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.396303   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:49.396311   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:49.396380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:49.437579   68713 cri.go:89] found id: ""
	I0815 18:39:49.437604   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.437612   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:49.437617   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:49.437663   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:49.476206   68713 cri.go:89] found id: ""
	I0815 18:39:49.476232   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.476239   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:49.476245   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:49.476296   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:49.511324   68713 cri.go:89] found id: ""
	I0815 18:39:49.511349   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.511357   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:49.511363   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:49.511428   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:49.545875   68713 cri.go:89] found id: ""
	I0815 18:39:49.545907   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.545916   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:49.545922   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:49.545981   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:49.582176   68713 cri.go:89] found id: ""
	I0815 18:39:49.582204   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.582228   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:49.582246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:49.582309   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:49.623288   68713 cri.go:89] found id: ""
	I0815 18:39:49.623318   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.623327   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:49.623333   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:49.623394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:49.662352   68713 cri.go:89] found id: ""
	I0815 18:39:49.662377   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.662389   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:49.662399   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:49.662424   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:49.745582   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:49.745617   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:49.785256   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:49.785295   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:49.835944   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:49.835979   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:49.852859   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:49.852886   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:49.928427   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
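	Each retry cycle above runs the same probe: the harness asks CRI-O on the node for every expected control-plane container by name and, finding none, falls back to gathering kubelet, dmesg, CRI-O and container-status output. The sketch below reproduces only the per-component probe; the container names and crictl invocation are copied verbatim from the log lines above, while running it over `minikube ssh` (or any other shell into the node) is an assumption about access, not something shown in this log. Empty output per name is what the harness reports as "No container was found matching ...".
	# Sketch only: same per-component crictl probe the harness runs (commands copied from the log).
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== $name =="
	  sudo crictl ps -a --quiet --name="$name"   # empty output => component container not present
	done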
	I0815 18:39:52.429223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:52.442384   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:52.442460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:52.480515   68713 cri.go:89] found id: ""
	I0815 18:39:52.480543   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.480553   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:52.480558   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:52.480605   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:52.518346   68713 cri.go:89] found id: ""
	I0815 18:39:52.518382   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.518393   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:52.518401   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:52.518460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:52.557696   68713 cri.go:89] found id: ""
	I0815 18:39:52.557722   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.557731   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:52.557736   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:52.557786   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:52.590849   68713 cri.go:89] found id: ""
	I0815 18:39:52.590879   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.590890   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:52.590898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:52.590961   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:52.629950   68713 cri.go:89] found id: ""
	I0815 18:39:52.629980   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.629992   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:52.629999   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:52.630047   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:52.666039   68713 cri.go:89] found id: ""
	I0815 18:39:52.666070   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.666081   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:52.666089   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:52.666146   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:52.699917   68713 cri.go:89] found id: ""
	I0815 18:39:52.699941   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.699949   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:52.699955   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:52.700001   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:52.735944   68713 cri.go:89] found id: ""
	I0815 18:39:52.735973   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.735981   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:52.735989   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:52.736001   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:52.805519   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.805537   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:52.805559   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:52.894175   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:52.894213   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:52.932974   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:52.933006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:52.984206   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:52.984244   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.498477   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:55.511319   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:55.511380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:55.544899   68713 cri.go:89] found id: ""
	I0815 18:39:55.544928   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.544936   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:55.544943   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:55.545003   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:55.578821   68713 cri.go:89] found id: ""
	I0815 18:39:55.578855   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.578864   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:55.578869   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:55.578922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:55.615392   68713 cri.go:89] found id: ""
	I0815 18:39:55.615422   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.615434   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:55.615441   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:55.615501   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:55.653456   68713 cri.go:89] found id: ""
	I0815 18:39:55.653482   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.653493   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:55.653500   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:55.653558   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:55.687716   68713 cri.go:89] found id: ""
	I0815 18:39:55.687741   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.687749   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:55.687755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:55.687802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:55.725518   68713 cri.go:89] found id: ""
	I0815 18:39:55.725543   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.725553   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:55.725561   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:55.725631   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:55.758451   68713 cri.go:89] found id: ""
	I0815 18:39:55.758479   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.758490   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:55.758498   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:55.758560   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:55.792653   68713 cri.go:89] found id: ""
	I0815 18:39:55.792680   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.792687   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:55.792699   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:55.792710   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:55.832127   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:55.832156   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:55.885255   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:55.885289   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.898980   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:55.899009   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:55.967579   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:55.967609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:55.967624   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
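	The "describe nodes" step fails identically on every pass: the staged v1.20.0 kubectl binary is pointed at the in-VM kubeconfig and the connection to localhost:8443 is refused because no kube-apiserver container exists. A minimal manual check, assuming shell access to the node: the kubectl command is copied from the log, while the curl probe of port 8443 is an added illustration rather than anything the harness runs.
	# Sketch only: reproduce the failing "describe nodes" call from inside the node.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# Illustrative extra check (not in the log): is anything listening on the apiserver port?
	curl -ks https://localhost:8443/ || echo "connection refused - apiserver not up"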
	I0815 18:39:58.543524   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:58.556338   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:58.556412   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:58.593359   68713 cri.go:89] found id: ""
	I0815 18:39:58.593390   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.593401   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:58.593409   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:58.593472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:58.628446   68713 cri.go:89] found id: ""
	I0815 18:39:58.628471   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.628481   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:58.628504   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:58.628567   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:58.663930   68713 cri.go:89] found id: ""
	I0815 18:39:58.663954   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.663964   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:58.663971   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:58.664028   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:58.701070   68713 cri.go:89] found id: ""
	I0815 18:39:58.701095   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.701103   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:58.701108   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:58.701156   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:58.734427   68713 cri.go:89] found id: ""
	I0815 18:39:58.734457   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.734468   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:58.734476   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:58.734543   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:58.769121   68713 cri.go:89] found id: ""
	I0815 18:39:58.769144   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.769152   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:58.769162   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:58.769215   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:58.805771   68713 cri.go:89] found id: ""
	I0815 18:39:58.805796   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.805803   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:58.805808   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:58.805856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:58.840288   68713 cri.go:89] found id: ""
	I0815 18:39:58.840315   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.840325   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:58.840336   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:58.840351   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:58.895856   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:58.895893   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:58.909453   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:58.909478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:58.975939   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:58.975960   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:58.975971   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.055318   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:59.055353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.595588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:01.608625   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:01.608690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:01.646105   68713 cri.go:89] found id: ""
	I0815 18:40:01.646133   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.646144   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:01.646151   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:01.646214   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:01.685162   68713 cri.go:89] found id: ""
	I0815 18:40:01.685192   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.685202   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:01.685210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:01.685261   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:01.721452   68713 cri.go:89] found id: ""
	I0815 18:40:01.721479   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.721499   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:01.721507   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:01.721576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:01.762288   68713 cri.go:89] found id: ""
	I0815 18:40:01.762318   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.762331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:01.762339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:01.762429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:01.800547   68713 cri.go:89] found id: ""
	I0815 18:40:01.800579   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.800590   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:01.800598   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:01.800660   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:01.839182   68713 cri.go:89] found id: ""
	I0815 18:40:01.839214   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.839223   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:01.839229   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:01.839294   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:01.875364   68713 cri.go:89] found id: ""
	I0815 18:40:01.875390   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.875398   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:01.875404   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:01.875452   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:01.910485   68713 cri.go:89] found id: ""
	I0815 18:40:01.910512   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.910521   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:01.910535   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:01.910547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.951970   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:01.951998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:02.005720   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:02.005764   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:02.020941   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:02.020969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:02.101206   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:02.101224   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:02.101236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:04.687482   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:04.701501   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:04.701562   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.739613   68713 cri.go:89] found id: ""
	I0815 18:40:04.739636   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.739644   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:04.739650   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:04.739704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:04.774419   68713 cri.go:89] found id: ""
	I0815 18:40:04.774443   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.774453   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:04.774460   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:04.774522   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:04.809516   68713 cri.go:89] found id: ""
	I0815 18:40:04.809538   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.809547   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:04.809552   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:04.809612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:04.843822   68713 cri.go:89] found id: ""
	I0815 18:40:04.843850   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.843870   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:04.843878   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:04.843942   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:04.883853   68713 cri.go:89] found id: ""
	I0815 18:40:04.883881   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.883892   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:04.883900   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:04.883962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:04.918811   68713 cri.go:89] found id: ""
	I0815 18:40:04.918838   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.918846   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:04.918852   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:04.918903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:04.953076   68713 cri.go:89] found id: ""
	I0815 18:40:04.953101   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.953110   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:04.953116   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:04.953163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:04.988219   68713 cri.go:89] found id: ""
	I0815 18:40:04.988246   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.988255   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:04.988264   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:04.988275   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:05.060859   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:05.060896   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:05.060913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:05.146768   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:05.146817   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:05.187816   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:05.187845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:05.239027   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:05.239067   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
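	Between probes the harness collects the node-side logs that would explain why nothing is running: the kubelet and CRI-O journals plus filtered dmesg and a container-status listing. The commands below are exactly the ones shown in the log and can be run as-is on the node to inspect the same output the test gathered.
	# Commands as they appear in the log; run on the node to see what the harness collected.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a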
	I0815 18:40:07.754503   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:07.769608   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:07.769695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:07.804435   68713 cri.go:89] found id: ""
	I0815 18:40:07.804460   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.804468   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:07.804474   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:07.804551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:07.839760   68713 cri.go:89] found id: ""
	I0815 18:40:07.839787   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.839797   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:07.839804   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:07.839868   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:07.877984   68713 cri.go:89] found id: ""
	I0815 18:40:07.878009   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.878017   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:07.878022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:07.878070   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:07.914294   68713 cri.go:89] found id: ""
	I0815 18:40:07.914319   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.914328   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:07.914336   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:07.914395   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:07.948751   68713 cri.go:89] found id: ""
	I0815 18:40:07.948777   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.948787   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:07.948795   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:07.948861   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:07.982262   68713 cri.go:89] found id: ""
	I0815 18:40:07.982288   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.982296   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:07.982302   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:07.982358   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:08.015560   68713 cri.go:89] found id: ""
	I0815 18:40:08.015588   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.015596   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:08.015602   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:08.015662   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:08.049854   68713 cri.go:89] found id: ""
	I0815 18:40:08.049878   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.049885   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:08.049893   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:08.049905   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:08.102269   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:08.102303   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:08.117181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:08.117209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:08.188586   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:08.188609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:08.188623   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:08.272204   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:08.272239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:10.813223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:10.826181   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:10.826257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:10.863728   68713 cri.go:89] found id: ""
	I0815 18:40:10.863753   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.863761   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:10.863766   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:10.863813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:10.898074   68713 cri.go:89] found id: ""
	I0815 18:40:10.898102   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.898113   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:10.898121   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:10.898183   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:10.933948   68713 cri.go:89] found id: ""
	I0815 18:40:10.933980   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.933991   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:10.933998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:10.934059   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:10.972402   68713 cri.go:89] found id: ""
	I0815 18:40:10.972428   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.972436   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:10.972442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:10.972509   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:11.006814   68713 cri.go:89] found id: ""
	I0815 18:40:11.006843   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.006851   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:11.006857   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:11.006909   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:11.042739   68713 cri.go:89] found id: ""
	I0815 18:40:11.042763   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.042771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:11.042777   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:11.042835   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:11.079132   68713 cri.go:89] found id: ""
	I0815 18:40:11.079164   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.079173   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:11.079179   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:11.079228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:11.113271   68713 cri.go:89] found id: ""
	I0815 18:40:11.113298   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.113309   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:11.113317   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:11.113328   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:11.166669   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:11.166698   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:11.180789   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:11.180815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:11.247954   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:11.247985   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:11.247999   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:11.331952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:11.331995   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:13.874466   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:13.888346   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:13.888416   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:13.922542   68713 cri.go:89] found id: ""
	I0815 18:40:13.922569   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.922579   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:13.922586   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:13.922654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:13.958039   68713 cri.go:89] found id: ""
	I0815 18:40:13.958066   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.958076   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:13.958082   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:13.958131   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:13.994095   68713 cri.go:89] found id: ""
	I0815 18:40:13.994125   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.994136   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:13.994144   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:13.994195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:14.027918   68713 cri.go:89] found id: ""
	I0815 18:40:14.027949   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.027960   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:14.027969   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:14.028027   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:14.063849   68713 cri.go:89] found id: ""
	I0815 18:40:14.063879   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.063889   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:14.063897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:14.063957   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:14.098444   68713 cri.go:89] found id: ""
	I0815 18:40:14.098473   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.098483   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:14.098490   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:14.098553   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:14.136834   68713 cri.go:89] found id: ""
	I0815 18:40:14.136861   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.136874   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:14.136880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:14.136925   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:14.172377   68713 cri.go:89] found id: ""
	I0815 18:40:14.172400   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.172408   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:14.172415   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:14.172430   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:14.212212   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:14.212242   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:14.268412   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:14.268450   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:14.282978   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:14.283006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:14.352777   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:14.352796   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:14.352822   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:16.939906   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:16.953118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:16.953178   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:16.991697   68713 cri.go:89] found id: ""
	I0815 18:40:16.991723   68713 logs.go:276] 0 containers: []
	W0815 18:40:16.991731   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:16.991736   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:16.991801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:17.027572   68713 cri.go:89] found id: ""
	I0815 18:40:17.027602   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.027613   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:17.027623   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:17.027682   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:17.060718   68713 cri.go:89] found id: ""
	I0815 18:40:17.060750   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.060763   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:17.060771   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:17.060829   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:17.096746   68713 cri.go:89] found id: ""
	I0815 18:40:17.096771   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.096780   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:17.096786   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:17.096846   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:17.130755   68713 cri.go:89] found id: ""
	I0815 18:40:17.130791   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.130802   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:17.130810   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:17.130872   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:17.167991   68713 cri.go:89] found id: ""
	I0815 18:40:17.168016   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.168026   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:17.168034   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:17.168093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:17.200695   68713 cri.go:89] found id: ""
	I0815 18:40:17.200722   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.200733   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:17.200741   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:17.200799   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:17.237788   68713 cri.go:89] found id: ""
	I0815 18:40:17.237816   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.237824   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:17.237833   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:17.237848   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:17.288888   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:17.288921   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:17.302862   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:17.302903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:17.370062   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:17.370085   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:17.370100   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:17.444742   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:17.444781   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:19.984813   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:19.998010   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:19.998077   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:20.032880   68713 cri.go:89] found id: ""
	I0815 18:40:20.032903   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.032912   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:20.032918   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:20.032973   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:20.069191   68713 cri.go:89] found id: ""
	I0815 18:40:20.069224   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.069236   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:20.069243   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:20.069301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:20.101930   68713 cri.go:89] found id: ""
	I0815 18:40:20.101954   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.101962   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:20.101968   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:20.102016   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:20.136981   68713 cri.go:89] found id: ""
	I0815 18:40:20.137006   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.137014   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:20.137020   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:20.137066   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:20.174517   68713 cri.go:89] found id: ""
	I0815 18:40:20.174543   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.174550   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:20.174556   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:20.174611   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:20.208525   68713 cri.go:89] found id: ""
	I0815 18:40:20.208549   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.208559   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:20.208567   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:20.208626   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:20.240824   68713 cri.go:89] found id: ""
	I0815 18:40:20.240855   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.240867   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:20.240874   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:20.240946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:20.277683   68713 cri.go:89] found id: ""
	I0815 18:40:20.277710   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.277720   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:20.277728   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:20.277739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:20.324271   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:20.324304   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:20.376250   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:20.376285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:20.392777   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:20.392813   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:20.464122   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:20.464156   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:20.464180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:23.041684   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:23.055779   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:23.055858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:23.095391   68713 cri.go:89] found id: ""
	I0815 18:40:23.095414   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.095426   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:23.095432   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:23.095483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:23.134907   68713 cri.go:89] found id: ""
	I0815 18:40:23.134936   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.134943   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:23.134949   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:23.134994   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:23.171806   68713 cri.go:89] found id: ""
	I0815 18:40:23.171845   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.171854   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:23.171861   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:23.171924   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:23.205378   68713 cri.go:89] found id: ""
	I0815 18:40:23.205404   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.205412   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:23.205417   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:23.205467   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:23.239503   68713 cri.go:89] found id: ""
	I0815 18:40:23.239531   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.239540   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:23.239547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:23.239614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:23.275802   68713 cri.go:89] found id: ""
	I0815 18:40:23.275828   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.275842   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:23.275849   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:23.275894   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:23.310127   68713 cri.go:89] found id: ""
	I0815 18:40:23.310154   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.310167   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:23.310173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:23.310219   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:23.344646   68713 cri.go:89] found id: ""
	I0815 18:40:23.344674   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.344685   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:23.344696   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:23.344711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:23.397260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:23.397310   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:23.425518   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:23.425553   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:23.495528   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:23.495547   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:23.495562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:23.574489   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:23.574524   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.119044   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:26.133806   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:26.133880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:26.175683   68713 cri.go:89] found id: ""
	I0815 18:40:26.175711   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.175722   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:26.175730   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:26.175789   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:26.210634   68713 cri.go:89] found id: ""
	I0815 18:40:26.210658   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.210665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:26.210671   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:26.210724   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:26.244146   68713 cri.go:89] found id: ""
	I0815 18:40:26.244176   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.244187   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:26.244195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:26.244274   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:26.277312   68713 cri.go:89] found id: ""
	I0815 18:40:26.277335   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.277343   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:26.277349   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:26.277410   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:26.311538   68713 cri.go:89] found id: ""
	I0815 18:40:26.311562   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.311570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:26.311576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:26.311623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:26.347816   68713 cri.go:89] found id: ""
	I0815 18:40:26.347840   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.347847   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:26.347853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:26.347906   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:26.381211   68713 cri.go:89] found id: ""
	I0815 18:40:26.381234   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.381242   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:26.381248   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:26.381303   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:26.413982   68713 cri.go:89] found id: ""
	I0815 18:40:26.414010   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.414018   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:26.414027   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:26.414038   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:26.500686   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:26.500721   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.537615   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:26.537642   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:26.590119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:26.590150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:26.603713   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:26.603739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:26.675455   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:29.176084   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:29.189743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:29.189813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:29.225500   68713 cri.go:89] found id: ""
	I0815 18:40:29.225536   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.225548   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:29.225557   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:29.225614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:29.261839   68713 cri.go:89] found id: ""
	I0815 18:40:29.261866   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.261877   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:29.261884   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:29.261946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:29.296685   68713 cri.go:89] found id: ""
	I0815 18:40:29.296708   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.296716   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:29.296728   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:29.296787   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:29.332524   68713 cri.go:89] found id: ""
	I0815 18:40:29.332550   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.332558   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:29.332564   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:29.332615   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:29.368918   68713 cri.go:89] found id: ""
	I0815 18:40:29.368943   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.368953   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:29.368961   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:29.369020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:29.403175   68713 cri.go:89] found id: ""
	I0815 18:40:29.403200   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.403211   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:29.403218   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:29.403279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:29.438957   68713 cri.go:89] found id: ""
	I0815 18:40:29.438981   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.438989   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:29.438994   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:29.439051   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:29.472153   68713 cri.go:89] found id: ""
	I0815 18:40:29.472184   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.472195   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:29.472206   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:29.472221   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:29.560484   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:29.560547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:29.600366   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:29.600402   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:29.656536   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:29.656569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:29.669899   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:29.669925   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:29.738515   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:32.239207   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:32.253976   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:32.254048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:32.290918   68713 cri.go:89] found id: ""
	I0815 18:40:32.290942   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.290951   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:32.290957   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:32.291009   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:32.325567   68713 cri.go:89] found id: ""
	I0815 18:40:32.325596   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.325606   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:32.325613   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:32.325674   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:32.360959   68713 cri.go:89] found id: ""
	I0815 18:40:32.360994   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.361005   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:32.361015   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:32.361090   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:32.398583   68713 cri.go:89] found id: ""
	I0815 18:40:32.398614   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.398625   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:32.398633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:32.398696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:32.432980   68713 cri.go:89] found id: ""
	I0815 18:40:32.433007   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.433017   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:32.433024   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:32.433088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:32.467645   68713 cri.go:89] found id: ""
	I0815 18:40:32.467678   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.467688   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:32.467697   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:32.467757   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:32.504233   68713 cri.go:89] found id: ""
	I0815 18:40:32.504265   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.504275   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:32.504282   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:32.504347   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:32.539127   68713 cri.go:89] found id: ""
	I0815 18:40:32.539160   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.539175   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:32.539186   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:32.539200   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:32.620782   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:32.620818   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:32.660920   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:32.660950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:32.714392   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:32.714425   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:32.727629   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:32.727655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:32.801258   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.301393   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:35.315460   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:35.315515   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:35.352266   68713 cri.go:89] found id: ""
	I0815 18:40:35.352287   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.352295   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:35.352301   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:35.352345   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:35.387274   68713 cri.go:89] found id: ""
	I0815 18:40:35.387305   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.387316   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:35.387324   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:35.387386   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:35.422376   68713 cri.go:89] found id: ""
	I0815 18:40:35.422403   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.422413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:35.422419   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:35.422464   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:35.456423   68713 cri.go:89] found id: ""
	I0815 18:40:35.456452   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.456459   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:35.456465   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:35.456544   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:35.494878   68713 cri.go:89] found id: ""
	I0815 18:40:35.494903   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.494912   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:35.494919   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:35.494980   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:35.528027   68713 cri.go:89] found id: ""
	I0815 18:40:35.528051   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.528062   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:35.528069   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:35.528128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:35.568543   68713 cri.go:89] found id: ""
	I0815 18:40:35.568570   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.568580   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:35.568587   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:35.568654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:35.627717   68713 cri.go:89] found id: ""
	I0815 18:40:35.627747   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.627766   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:35.627777   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:35.627792   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:35.691497   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:35.691530   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:35.705062   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:35.705092   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:35.783785   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.783806   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:35.783819   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:35.867282   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:35.867317   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:38.407940   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:38.421571   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:38.421648   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:38.456551   68713 cri.go:89] found id: ""
	I0815 18:40:38.456586   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.456597   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:38.456604   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:38.456665   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:38.494133   68713 cri.go:89] found id: ""
	I0815 18:40:38.494167   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.494179   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:38.494186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:38.494253   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:38.531566   68713 cri.go:89] found id: ""
	I0815 18:40:38.531599   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.531610   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:38.531617   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:38.531678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:38.567613   68713 cri.go:89] found id: ""
	I0815 18:40:38.567640   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.567652   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:38.567659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:38.567717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:38.603172   68713 cri.go:89] found id: ""
	I0815 18:40:38.603201   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.603212   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:38.603225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:38.603284   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:38.639600   68713 cri.go:89] found id: ""
	I0815 18:40:38.639629   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.639640   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:38.639648   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:38.639710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:38.675780   68713 cri.go:89] found id: ""
	I0815 18:40:38.675811   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.675821   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:38.675828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:38.675885   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:38.708745   68713 cri.go:89] found id: ""
	I0815 18:40:38.708775   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.708786   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:38.708796   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:38.708815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:38.722485   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:38.722514   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:38.793913   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:38.793936   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:38.793950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:38.880706   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:38.880744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:38.919505   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:38.919533   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.472452   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:41.486204   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:41.486264   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:41.520251   68713 cri.go:89] found id: ""
	I0815 18:40:41.520282   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.520294   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:41.520302   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:41.520362   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:41.561294   68713 cri.go:89] found id: ""
	I0815 18:40:41.561325   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.561336   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:41.561343   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:41.561403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:41.595290   68713 cri.go:89] found id: ""
	I0815 18:40:41.595318   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.595326   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:41.595331   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:41.595381   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:41.629706   68713 cri.go:89] found id: ""
	I0815 18:40:41.629736   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.629744   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:41.629750   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:41.629816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:41.671862   68713 cri.go:89] found id: ""
	I0815 18:40:41.671885   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.671893   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:41.671898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:41.671951   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:41.710298   68713 cri.go:89] found id: ""
	I0815 18:40:41.710349   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.710360   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:41.710368   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:41.710425   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:41.745434   68713 cri.go:89] found id: ""
	I0815 18:40:41.745472   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.745487   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:41.745492   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:41.745548   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:41.781038   68713 cri.go:89] found id: ""
	I0815 18:40:41.781073   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.781081   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:41.781088   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:41.781099   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:41.863977   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:41.864023   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:41.907477   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:41.907505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.962921   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:41.962956   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:41.976458   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:41.976505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:42.044372   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:44.544803   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:44.559538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:44.559595   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:44.595471   68713 cri.go:89] found id: ""
	I0815 18:40:44.595501   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.595511   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:44.595518   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:44.595581   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:44.630148   68713 cri.go:89] found id: ""
	I0815 18:40:44.630173   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.630181   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:44.630189   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:44.630245   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:44.666084   68713 cri.go:89] found id: ""
	I0815 18:40:44.666110   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.666119   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:44.666126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:44.666180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:44.700286   68713 cri.go:89] found id: ""
	I0815 18:40:44.700320   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.700331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:44.700339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:44.700394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:44.734115   68713 cri.go:89] found id: ""
	I0815 18:40:44.734143   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.734151   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:44.734157   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:44.734216   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:44.770306   68713 cri.go:89] found id: ""
	I0815 18:40:44.770363   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.770376   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:44.770383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:44.770453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:44.806766   68713 cri.go:89] found id: ""
	I0815 18:40:44.806790   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.806798   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:44.806803   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:44.806865   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:44.843574   68713 cri.go:89] found id: ""
	I0815 18:40:44.843603   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.843613   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:44.843623   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:44.843638   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:44.896119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:44.896148   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:44.909537   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:44.909562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:44.980268   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:44.980290   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:44.980307   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:45.066589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:45.066626   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:47.605934   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:47.620644   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:47.620709   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:47.660939   68713 cri.go:89] found id: ""
	I0815 18:40:47.660960   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.660967   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:47.660973   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:47.661021   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:47.701018   68713 cri.go:89] found id: ""
	I0815 18:40:47.701047   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.701059   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:47.701107   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:47.701177   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:47.739487   68713 cri.go:89] found id: ""
	I0815 18:40:47.739514   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.739523   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:47.739528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:47.739584   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:47.781483   68713 cri.go:89] found id: ""
	I0815 18:40:47.781508   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.781515   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:47.781520   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:47.781571   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:47.816781   68713 cri.go:89] found id: ""
	I0815 18:40:47.816806   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.816813   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:47.816819   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:47.816875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:47.853951   68713 cri.go:89] found id: ""
	I0815 18:40:47.853976   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.853984   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:47.853990   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:47.854062   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:47.892208   68713 cri.go:89] found id: ""
	I0815 18:40:47.892237   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.892246   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:47.892252   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:47.892311   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:47.926916   68713 cri.go:89] found id: ""
	I0815 18:40:47.926944   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.926965   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:47.926976   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:47.926990   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:48.002907   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:48.002927   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:48.002942   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:48.085727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:48.085762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:48.127192   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:48.127224   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:48.180172   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:48.180208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:50.694573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:50.709411   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:50.709472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:50.750956   68713 cri.go:89] found id: ""
	I0815 18:40:50.750985   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.750994   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:50.751000   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:50.751048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:50.791072   68713 cri.go:89] found id: ""
	I0815 18:40:50.791149   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.791174   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:50.791186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:50.791247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:50.827692   68713 cri.go:89] found id: ""
	I0815 18:40:50.827717   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.827728   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:50.827735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:50.827794   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:50.866587   68713 cri.go:89] found id: ""
	I0815 18:40:50.866616   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.866626   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:50.866633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:50.866692   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:50.907012   68713 cri.go:89] found id: ""
	I0815 18:40:50.907040   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.907047   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:50.907053   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:50.907101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:50.951212   68713 cri.go:89] found id: ""
	I0815 18:40:50.951243   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.951256   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:50.951263   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:50.951316   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:50.989771   68713 cri.go:89] found id: ""
	I0815 18:40:50.989802   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.989812   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:50.989818   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:50.989867   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:51.024423   68713 cri.go:89] found id: ""
	I0815 18:40:51.024454   68713 logs.go:276] 0 containers: []
	W0815 18:40:51.024465   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:51.024475   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:51.024500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:51.076973   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:51.077012   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:51.090963   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:51.090989   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:51.169981   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:51.170005   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:51.170029   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:51.248990   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:51.249040   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:53.790172   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:53.803752   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:53.803816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:53.843203   68713 cri.go:89] found id: ""
	I0815 18:40:53.843231   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.843246   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:53.843254   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:53.843314   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:53.878975   68713 cri.go:89] found id: ""
	I0815 18:40:53.879000   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.879008   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:53.879013   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:53.879078   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:53.915640   68713 cri.go:89] found id: ""
	I0815 18:40:53.915668   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.915675   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:53.915683   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:53.915746   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:53.956312   68713 cri.go:89] found id: ""
	I0815 18:40:53.956340   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.956356   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:53.956365   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:53.956426   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:53.992276   68713 cri.go:89] found id: ""
	I0815 18:40:53.992304   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.992314   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:53.992322   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:53.992387   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:54.034653   68713 cri.go:89] found id: ""
	I0815 18:40:54.034682   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.034693   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:54.034701   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:54.034761   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:54.072993   68713 cri.go:89] found id: ""
	I0815 18:40:54.073018   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.073027   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:54.073038   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:54.073107   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:54.107414   68713 cri.go:89] found id: ""
	I0815 18:40:54.107446   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.107456   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:54.107466   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:54.107481   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:54.145900   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:54.145928   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:54.197609   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:54.197639   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:54.211384   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:54.211410   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:54.280991   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:54.281018   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:54.281031   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
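	(Note: the probe repeated in each of these cycles reduces to one crictl query per control-plane component, each returning no containers because the control plane never came up. A minimal sketch of that loop, using only the component names and crictl flags that appear in the log lines above; the shell loop itself is an assumption for illustration, not minikube's actual code.)

	#!/bin/bash
	# Query CRI-O for each component the log checks; an empty result
	# corresponds to the 'found id: ""' lines above.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "No container was found matching \"$name\""
	  fi
	done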
	I0815 18:40:56.868270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:56.881168   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:56.881248   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:56.915206   68713 cri.go:89] found id: ""
	I0815 18:40:56.915235   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.915243   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:56.915249   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:56.915308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:56.950838   68713 cri.go:89] found id: ""
	I0815 18:40:56.950864   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.950873   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:56.950879   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:56.950937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:56.993625   68713 cri.go:89] found id: ""
	I0815 18:40:56.993649   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.993656   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:56.993662   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:56.993718   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:57.029109   68713 cri.go:89] found id: ""
	I0815 18:40:57.029139   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.029150   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:57.029158   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:57.029213   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:57.063480   68713 cri.go:89] found id: ""
	I0815 18:40:57.063518   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.063530   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:57.063538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:57.063598   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:57.102830   68713 cri.go:89] found id: ""
	I0815 18:40:57.102859   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.102870   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:57.102877   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:57.102938   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:57.137116   68713 cri.go:89] found id: ""
	I0815 18:40:57.137146   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.137159   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:57.137173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:57.137235   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:57.174678   68713 cri.go:89] found id: ""
	I0815 18:40:57.174706   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.174717   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:57.174727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:57.174741   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:57.213270   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:57.213311   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:57.269463   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:57.269500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:57.283891   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:57.283915   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:57.355563   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:57.355589   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:57.355601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:59.943493   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:59.957225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:59.957285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:59.993113   68713 cri.go:89] found id: ""
	I0815 18:40:59.993142   68713 logs.go:276] 0 containers: []
	W0815 18:40:59.993153   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:59.993167   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:59.993228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:00.033485   68713 cri.go:89] found id: ""
	I0815 18:41:00.033515   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.033525   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:00.033533   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:00.033594   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:00.070808   68713 cri.go:89] found id: ""
	I0815 18:41:00.070830   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.070838   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:00.070844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:00.070893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:00.113043   68713 cri.go:89] found id: ""
	I0815 18:41:00.113067   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.113076   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:00.113082   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:00.113139   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:00.148089   68713 cri.go:89] found id: ""
	I0815 18:41:00.148118   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.148129   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:00.148136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:00.148206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:00.188343   68713 cri.go:89] found id: ""
	I0815 18:41:00.188375   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.188386   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:00.188394   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:00.188448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:00.224287   68713 cri.go:89] found id: ""
	I0815 18:41:00.224312   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.224323   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:00.224337   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:00.224398   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:00.263983   68713 cri.go:89] found id: ""
	I0815 18:41:00.264008   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.264016   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:00.264025   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:00.264037   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:00.278057   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:00.278083   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:00.355112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:00.355133   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:00.355146   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:00.436636   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:00.436672   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:00.474774   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:00.474801   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:03.027434   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:03.041422   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:03.041496   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:03.074093   68713 cri.go:89] found id: ""
	I0815 18:41:03.074119   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.074130   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:03.074138   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:03.074198   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:03.111489   68713 cri.go:89] found id: ""
	I0815 18:41:03.111517   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.111529   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:03.111537   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:03.111599   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:03.147716   68713 cri.go:89] found id: ""
	I0815 18:41:03.147747   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.147756   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:03.147762   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:03.147825   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:03.184609   68713 cri.go:89] found id: ""
	I0815 18:41:03.184635   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.184644   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:03.184652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:03.184710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:03.221839   68713 cri.go:89] found id: ""
	I0815 18:41:03.221869   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.221878   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:03.221883   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:03.221935   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:03.262619   68713 cri.go:89] found id: ""
	I0815 18:41:03.262649   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.262661   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:03.262669   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:03.262733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:03.297826   68713 cri.go:89] found id: ""
	I0815 18:41:03.297849   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.297864   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:03.297875   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:03.297922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:03.345046   68713 cri.go:89] found id: ""
	I0815 18:41:03.345074   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.345083   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:03.345095   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:03.345133   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:03.416878   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:03.416905   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:03.416920   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:03.491548   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:03.491583   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:03.533821   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:03.533852   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:03.587749   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:03.587787   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.104002   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:06.118123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:06.118195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:06.156179   68713 cri.go:89] found id: ""
	I0815 18:41:06.156204   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.156213   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:06.156218   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:06.156275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:06.192834   68713 cri.go:89] found id: ""
	I0815 18:41:06.192858   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.192866   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:06.192871   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:06.192918   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:06.228355   68713 cri.go:89] found id: ""
	I0815 18:41:06.228379   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.228387   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:06.228393   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:06.228453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:06.262041   68713 cri.go:89] found id: ""
	I0815 18:41:06.262068   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.262079   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:06.262086   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:06.262152   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:06.303217   68713 cri.go:89] found id: ""
	I0815 18:41:06.303249   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.303261   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:06.303268   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:06.303335   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:06.337180   68713 cri.go:89] found id: ""
	I0815 18:41:06.337208   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.337215   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:06.337222   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:06.337270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:06.375054   68713 cri.go:89] found id: ""
	I0815 18:41:06.375081   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.375088   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:06.375095   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:06.375163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:06.412188   68713 cri.go:89] found id: ""
	I0815 18:41:06.412216   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.412227   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:06.412239   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:06.412255   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.425607   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:06.425633   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:06.500853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:06.500872   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:06.500883   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:06.577297   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:06.577333   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:06.620209   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:06.620239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:09.171606   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:09.184197   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:09.184257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:09.217865   68713 cri.go:89] found id: ""
	I0815 18:41:09.217893   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.217904   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:09.217912   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:09.217967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:09.254032   68713 cri.go:89] found id: ""
	I0815 18:41:09.254055   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.254064   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:09.254073   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:09.254128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:09.291772   68713 cri.go:89] found id: ""
	I0815 18:41:09.291798   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.291808   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:09.291816   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:09.291880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:09.326695   68713 cri.go:89] found id: ""
	I0815 18:41:09.326717   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.326726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:09.326731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:09.326791   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:09.365779   68713 cri.go:89] found id: ""
	I0815 18:41:09.365807   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.365818   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:09.365825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:09.365880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:09.413475   68713 cri.go:89] found id: ""
	I0815 18:41:09.413500   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.413509   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:09.413514   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:09.413578   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:09.449483   68713 cri.go:89] found id: ""
	I0815 18:41:09.449511   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.449521   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:09.449528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:09.449623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:09.487484   68713 cri.go:89] found id: ""
	I0815 18:41:09.487513   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.487525   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:09.487535   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:09.487549   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:09.536746   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:09.536777   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:09.549912   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:09.549944   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:09.619192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:09.619227   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:09.619246   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:09.698370   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:09.698404   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:12.240745   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:12.254814   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:12.254875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:12.291346   68713 cri.go:89] found id: ""
	I0815 18:41:12.291376   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.291387   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:12.291395   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:12.291456   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:12.324832   68713 cri.go:89] found id: ""
	I0815 18:41:12.324867   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.324878   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:12.324886   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:12.324950   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:12.360172   68713 cri.go:89] found id: ""
	I0815 18:41:12.360193   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.360201   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:12.360206   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:12.360251   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:12.394671   68713 cri.go:89] found id: ""
	I0815 18:41:12.394700   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.394710   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:12.394731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:12.394800   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:12.428951   68713 cri.go:89] found id: ""
	I0815 18:41:12.428999   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.429007   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:12.429013   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:12.429057   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:12.466035   68713 cri.go:89] found id: ""
	I0815 18:41:12.466061   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.466069   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:12.466075   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:12.466125   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:12.500003   68713 cri.go:89] found id: ""
	I0815 18:41:12.500031   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.500042   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:12.500050   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:12.500105   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:12.537433   68713 cri.go:89] found id: ""
	I0815 18:41:12.537457   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.537464   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:12.537473   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:12.537484   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:12.586768   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:12.586809   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:12.600549   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:12.600578   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:12.673112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:12.673138   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:12.673154   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:12.754689   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:12.754726   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:15.294667   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:15.307758   68713 kubeadm.go:597] duration metric: took 4m2.67500099s to restartPrimaryControlPlane
	W0815 18:41:15.307840   68713 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:41:15.307872   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:41:15.761255   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:15.776049   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:15.786643   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:15.796517   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:15.796537   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:15.796585   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:15.806118   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:15.806167   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:15.816363   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:15.826396   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:15.826449   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:15.836538   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.847035   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:15.847093   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.857475   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:15.867084   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:15.867144   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
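	(Note: the stale-config cleanup in the lines above amounts to checking the four kubeconfig files and removing each one unless it already references https://control-plane.minikube.internal:8443. A rough sketch of that check, using only the file names, endpoint, and commands shown in the log; the loop structure and the -q flag are assumptions for brevity.)

	#!/bin/bash
	# Remove kubeconfigs that do not reference the expected control-plane endpoint.
	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  path="/etc/kubernetes/$f"
	  if ! sudo grep -q "$endpoint" "$path"; then
	    sudo rm -f "$path"   # missing or pointing elsewhere, as in the log
	  fi
	done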
	I0815 18:41:15.879736   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:15.954497   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:41:15.954588   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:16.098128   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:16.098244   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:16.098345   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:41:16.288507   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:16.290439   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:16.290555   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:16.290656   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:16.290756   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:16.290831   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:16.290923   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:16.291003   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:16.291096   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:16.291182   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:16.291280   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:16.291396   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:16.291457   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:16.291509   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:16.363570   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:16.549782   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:16.789250   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:16.983388   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:17.004293   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:17.006438   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:17.006485   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:17.154583   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:17.156594   68713 out.go:235]   - Booting up control plane ...
	I0815 18:41:17.156717   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:17.177351   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:17.179286   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:17.180313   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:17.183829   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:41:57.184855   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:41:57.185437   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:41:57.185667   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:02.186077   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:02.186272   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:12.186839   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:12.187041   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:32.187938   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:32.188123   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.189799   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:43:12.190012   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.190023   68713 kubeadm.go:310] 
	I0815 18:43:12.190075   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:43:12.190133   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:43:12.190148   68713 kubeadm.go:310] 
	I0815 18:43:12.190205   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:43:12.190265   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:43:12.190394   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:43:12.190403   68713 kubeadm.go:310] 
	I0815 18:43:12.190523   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:43:12.190571   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:43:12.190627   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:43:12.190636   68713 kubeadm.go:310] 
	I0815 18:43:12.190772   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:43:12.190928   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:43:12.190950   68713 kubeadm.go:310] 
	I0815 18:43:12.191108   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:43:12.191218   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:43:12.191344   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:43:12.191478   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:43:12.191504   68713 kubeadm.go:310] 
	I0815 18:43:12.192283   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:43:12.192421   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:43:12.192523   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0815 18:43:12.192655   68713 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
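	(Note: the troubleshooting steps kubeadm prints above can be run directly on the node. A short sketch combining them, using only the commands quoted in the error text; the CONTAINERID placeholder is kubeadm's own and is left as-is.)

	#!/bin/bash
	# Check whether the kubelet is running and why the control-plane
	# containers never started, as the kubeadm error text suggests.
	systemctl status kubelet
	journalctl -xeu kubelet
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# Once a failing container id is identified:
	# sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID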
	
	I0815 18:43:12.192699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:43:12.658571   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:43:12.675797   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:43:12.687340   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:43:12.687370   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:43:12.687422   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:43:12.698401   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:43:12.698464   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:43:12.709632   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:43:12.720330   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:43:12.720386   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:43:12.731593   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.742122   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:43:12.742185   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.753042   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:43:12.762799   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:43:12.762855   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:43:12.772788   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:43:12.987927   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:45:08.956975   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:45:08.957069   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:45:08.958834   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:45:08.958904   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:45:08.958993   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:45:08.959133   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:45:08.959280   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:45:08.959376   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:45:08.961205   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:45:08.961294   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:45:08.961352   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:45:08.961424   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:45:08.961475   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:45:08.961536   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:45:08.961581   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:45:08.961637   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:45:08.961689   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:45:08.961795   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:45:08.961910   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:45:08.961971   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:45:08.962030   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:45:08.962078   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:45:08.962127   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:45:08.962214   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:45:08.962316   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:45:08.962448   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:45:08.962565   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:45:08.962626   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:45:08.962724   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:45:08.964403   68713 out.go:235]   - Booting up control plane ...
	I0815 18:45:08.964526   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:45:08.964631   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:45:08.964736   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:45:08.964866   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:45:08.965043   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:45:08.965121   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:45:08.965225   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965418   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965508   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965703   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965766   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965919   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965981   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966140   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966200   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966381   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966389   68713 kubeadm.go:310] 
	I0815 18:45:08.966438   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:45:08.966473   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:45:08.966481   68713 kubeadm.go:310] 
	I0815 18:45:08.966533   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:45:08.966580   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:45:08.966711   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:45:08.966718   68713 kubeadm.go:310] 
	I0815 18:45:08.966844   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:45:08.966900   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:45:08.966948   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:45:08.966958   68713 kubeadm.go:310] 
	I0815 18:45:08.967082   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:45:08.967201   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:45:08.967214   68713 kubeadm.go:310] 
	I0815 18:45:08.967341   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:45:08.967450   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:45:08.967548   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:45:08.967646   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:45:08.967678   68713 kubeadm.go:310] 
	I0815 18:45:08.967716   68713 kubeadm.go:394] duration metric: took 7m56.388213745s to StartCluster
	I0815 18:45:08.967768   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:45:08.967834   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:45:09.013913   68713 cri.go:89] found id: ""
	I0815 18:45:09.013943   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.013954   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:45:09.013961   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:45:09.014030   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:45:09.051370   68713 cri.go:89] found id: ""
	I0815 18:45:09.051395   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.051403   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:45:09.051409   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:45:09.051477   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:45:09.086615   68713 cri.go:89] found id: ""
	I0815 18:45:09.086646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.086653   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:45:09.086659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:45:09.086708   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:45:09.122335   68713 cri.go:89] found id: ""
	I0815 18:45:09.122370   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.122381   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:45:09.122389   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:45:09.122453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:45:09.163207   68713 cri.go:89] found id: ""
	I0815 18:45:09.163232   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.163241   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:45:09.163247   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:45:09.163308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:45:09.199396   68713 cri.go:89] found id: ""
	I0815 18:45:09.199426   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.199437   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:45:09.199444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:45:09.199504   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:45:09.235073   68713 cri.go:89] found id: ""
	I0815 18:45:09.235101   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.235112   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:45:09.235120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:45:09.235180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:45:09.271614   68713 cri.go:89] found id: ""
	I0815 18:45:09.271646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.271659   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:45:09.271671   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:45:09.271686   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:45:09.372192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:45:09.372214   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:45:09.372231   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:45:09.496743   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:45:09.496780   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:45:09.540434   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:45:09.540471   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:45:09.595546   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:45:09.595584   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 18:45:09.609831   68713 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:45:09.609885   68713 out.go:270] * 
	* 
	W0815 18:45:09.609942   68713 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.609956   68713 out.go:270] * 
	* 
	W0815 18:45:09.610794   68713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:45:09.614213   68713 out.go:201] 
	W0815 18:45:09.615379   68713 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.615420   68713 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:45:09.615437   68713 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:45:09.616840   68713 out.go:201] 

                                                
                                                
** /stderr **
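For reference, the kubeadm output above keeps pointing at the same triage path: the kubelet never answered http://localhost:10248/healthz, so the control-plane static pods were never started. A minimal triage sketch on the node, using only the commands the kubeadm output itself recommends (assuming shell access to the VM, e.g. via `out/minikube-linux-amd64 ssh -p old-k8s-version-278865`):

	sudo systemctl status kubelet                  # is the kubelet unit running/enabled?
	sudo journalctl -xeu kubelet | tail -n 100     # why it exited (cgroup driver, config, ...)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # logs of a failing container
	curl -sSL http://localhost:10248/healthz       # the probe kubeadm was retrying

The `minikube logs --file=logs.txt` command mentioned in the boxed advice above is the corresponding way to capture the full log bundle for an issue report.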
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-278865 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
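The start command exited with status 109 after the K8S_KUBELET_NOT_RUNNING error shown above. The log's own suggestion is to retry with the kubelet cgroup driver pinned to systemd; a hedged sketch of that retry is simply the failing command re-run with the suggested `--extra-config` flag appended (this is the remediation the log proposes, not a verified fix for this run):

	out/minikube-linux-amd64 start -p old-k8s-version-278865 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd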
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 2 (227.990222ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-278865 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-278865 logs -n 25: (1.590317246s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-498665                              | stopped-upgrade-498665       | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-698209 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | disable-driver-mounts-698209                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:29 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-599042             | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-555028            | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-423062  | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-278865        | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-599042                  | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC | 15 Aug 24 18:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-555028                 | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-423062       | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-278865             | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:32:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:32:52.788403   68713 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:32:52.788704   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788715   68713 out.go:358] Setting ErrFile to fd 2...
	I0815 18:32:52.788719   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788916   68713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:32:52.789431   68713 out.go:352] Setting JSON to false
	I0815 18:32:52.790297   68713 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8119,"bootTime":1723738654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:32:52.790355   68713 start.go:139] virtualization: kvm guest
	I0815 18:32:52.792478   68713 out.go:177] * [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:32:52.793818   68713 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:32:52.793864   68713 notify.go:220] Checking for updates...
	I0815 18:32:52.796618   68713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:32:52.797914   68713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:32:52.799054   68713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:32:52.800337   68713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:32:52.801448   68713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:32:52.803087   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:32:52.803465   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.803521   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.819013   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0815 18:32:52.819447   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.819966   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.819985   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.820284   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.820482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.822582   68713 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 18:32:52.824024   68713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:32:52.824380   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.824425   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.839486   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0815 18:32:52.839905   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.840345   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.840367   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.840730   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.840904   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.876811   68713 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:32:52.878075   68713 start.go:297] selected driver: kvm2
	I0815 18:32:52.878098   68713 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.878240   68713 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:32:52.878920   68713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.879001   68713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:32:52.894158   68713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:32:52.894895   68713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:32:52.894953   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:32:52.894969   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:32:52.895020   68713 start.go:340] cluster config:
	{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.895203   68713 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.897304   68713 out.go:177] * Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	I0815 18:32:51.348753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:32:52.898737   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:32:52.898785   68713 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:32:52.898795   68713 cache.go:56] Caching tarball of preloaded images
	I0815 18:32:52.898861   68713 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:32:52.898871   68713 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:32:52.898962   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:32:52.899159   68713 start.go:360] acquireMachinesLock for old-k8s-version-278865: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:32:57.424754   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:00.496786   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:06.576768   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:09.648759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:15.728760   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:18.800783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:24.880725   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:27.952781   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:34.032763   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:37.104737   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:43.184796   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:46.260701   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:52.336771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:55.408745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:01.488742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:04.560759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:10.640771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:13.712753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:19.792795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:22.864720   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:28.944769   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:32.016745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:38.096783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:41.168739   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:47.248802   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:50.320778   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:56.400717   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:59.472780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:05.552762   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:08.624707   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:14.704753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:17.776748   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:23.856782   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:26.932742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:33.008795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:36.080807   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:42.160767   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:45.232800   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:51.312780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:54.384719   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:00.464740   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:03.536736   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:06.540805   68248 start.go:364] duration metric: took 4m1.610543673s to acquireMachinesLock for "embed-certs-555028"
	I0815 18:36:06.540869   68248 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:06.540881   68248 fix.go:54] fixHost starting: 
	I0815 18:36:06.541241   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:06.541272   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:06.556680   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0815 18:36:06.557105   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:06.557518   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:36:06.557540   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:06.557831   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:06.558059   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:06.558202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:36:06.559702   68248 fix.go:112] recreateIfNeeded on embed-certs-555028: state=Stopped err=<nil>
	I0815 18:36:06.559724   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	W0815 18:36:06.559877   68248 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:06.561410   68248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-555028" ...
	I0815 18:36:06.538474   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:06.538508   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.538805   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:36:06.538831   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.539016   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:36:06.540664   67936 machine.go:96] duration metric: took 4m37.431349663s to provisionDockerMachine
	I0815 18:36:06.540702   67936 fix.go:56] duration metric: took 4m37.452150687s for fixHost
	I0815 18:36:06.540709   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 4m37.452172562s
	W0815 18:36:06.540732   67936 start.go:714] error starting host: provision: host is not running
	W0815 18:36:06.540801   67936 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0815 18:36:06.540809   67936 start.go:729] Will try again in 5 seconds ...
	I0815 18:36:06.562384   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Start
	I0815 18:36:06.562537   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring networks are active...
	I0815 18:36:06.563252   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network default is active
	I0815 18:36:06.563554   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network mk-embed-certs-555028 is active
	I0815 18:36:06.563908   68248 main.go:141] libmachine: (embed-certs-555028) Getting domain xml...
	I0815 18:36:06.564614   68248 main.go:141] libmachine: (embed-certs-555028) Creating domain...
	I0815 18:36:07.763793   68248 main.go:141] libmachine: (embed-certs-555028) Waiting to get IP...
	I0815 18:36:07.764733   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.765099   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.765200   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.765085   69393 retry.go:31] will retry after 206.840107ms: waiting for machine to come up
	I0815 18:36:07.973596   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.974069   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.974093   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.974019   69393 retry.go:31] will retry after 319.002956ms: waiting for machine to come up
	I0815 18:36:08.294670   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.295125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.295154   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.295073   69393 retry.go:31] will retry after 425.99373ms: waiting for machine to come up
	I0815 18:36:08.722549   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.722954   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.722985   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.722903   69393 retry.go:31] will retry after 428.077891ms: waiting for machine to come up
	I0815 18:36:09.152674   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.153155   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.153187   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.153108   69393 retry.go:31] will retry after 476.041155ms: waiting for machine to come up
	I0815 18:36:09.630963   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.631456   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.631485   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.631395   69393 retry.go:31] will retry after 751.179582ms: waiting for machine to come up
	I0815 18:36:11.542364   67936 start.go:360] acquireMachinesLock for no-preload-599042: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:36:10.384466   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:10.384888   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:10.384916   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:10.384842   69393 retry.go:31] will retry after 1.028202731s: waiting for machine to come up
	I0815 18:36:11.414905   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:11.415343   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:11.415373   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:11.415283   69393 retry.go:31] will retry after 1.129105535s: waiting for machine to come up
	I0815 18:36:12.545941   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:12.546365   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:12.546387   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:12.546320   69393 retry.go:31] will retry after 1.734323812s: waiting for machine to come up
	I0815 18:36:14.283247   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:14.283622   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:14.283653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:14.283569   69393 retry.go:31] will retry after 1.657173562s: waiting for machine to come up
	I0815 18:36:15.943329   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:15.943730   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:15.943760   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:15.943669   69393 retry.go:31] will retry after 2.349664822s: waiting for machine to come up
	I0815 18:36:18.295797   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:18.296330   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:18.296363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:18.296264   69393 retry.go:31] will retry after 2.889119284s: waiting for machine to come up
	I0815 18:36:21.186597   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:21.186983   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:21.187004   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:21.186945   69393 retry.go:31] will retry after 2.79101595s: waiting for machine to come up
	I0815 18:36:23.981271   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981732   68248 main.go:141] libmachine: (embed-certs-555028) Found IP for machine: 192.168.50.234
	I0815 18:36:23.981761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has current primary IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981770   68248 main.go:141] libmachine: (embed-certs-555028) Reserving static IP address...
	I0815 18:36:23.982166   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.982189   68248 main.go:141] libmachine: (embed-certs-555028) DBG | skip adding static IP to network mk-embed-certs-555028 - found existing host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"}
	I0815 18:36:23.982200   68248 main.go:141] libmachine: (embed-certs-555028) Reserved static IP address: 192.168.50.234
	I0815 18:36:23.982210   68248 main.go:141] libmachine: (embed-certs-555028) Waiting for SSH to be available...
	I0815 18:36:23.982220   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Getting to WaitForSSH function...
	I0815 18:36:23.984253   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984578   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.984601   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984696   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH client type: external
	I0815 18:36:23.984720   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa (-rw-------)
	I0815 18:36:23.984752   68248 main.go:141] libmachine: (embed-certs-555028) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:23.984763   68248 main.go:141] libmachine: (embed-certs-555028) DBG | About to run SSH command:
	I0815 18:36:23.984772   68248 main.go:141] libmachine: (embed-certs-555028) DBG | exit 0
	I0815 18:36:24.104618   68248 main.go:141] libmachine: (embed-certs-555028) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:24.105023   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetConfigRaw
	I0815 18:36:24.105694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.108191   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108532   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.108568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108844   68248 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/config.json ...
	I0815 18:36:24.109037   68248 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:24.109055   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.109249   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.111363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111680   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.111725   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111821   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.111989   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112141   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112277   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.112454   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.112662   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.112673   68248 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:24.208951   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:24.208986   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209255   68248 buildroot.go:166] provisioning hostname "embed-certs-555028"
	I0815 18:36:24.209285   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209514   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.212393   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.212850   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.212878   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.213010   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.213198   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213340   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213466   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.213663   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.213821   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.213832   68248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-555028 && echo "embed-certs-555028" | sudo tee /etc/hostname
	I0815 18:36:24.327157   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-555028
	
	I0815 18:36:24.327191   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.330193   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330577   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.330607   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330824   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.331029   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331174   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331325   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.331508   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.331713   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.331732   68248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-555028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-555028/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-555028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:24.437909   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
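	[editor's note] The SSH command logged just above is an idempotent /etc/hosts update: if no line already ends with the machine hostname, the 127.0.1.1 entry is rewritten, otherwise one is appended. The standalone Go sketch below mirrors that logic on a hosts-file string purely for illustration; it is a hypothetical helper, not minikube's implementation (minikube runs the shell form over SSH so it can use sudo on the guest).

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostname approximates the logged shell snippet: if no /etc/hosts line
	// already maps to name, rewrite the 127.0.1.1 entry, or append one.
	func ensureHostname(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) > 0 && f[len(f)-1] == name {
				return hosts // hostname already mapped, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name // replace the loopback alias line
				return strings.Join(lines, "\n")
			}
		}
		return hosts + "\n127.0.1.1 " + name // no alias line yet, append one
	}

	func main() {
		fmt.Println(ensureHostname("127.0.0.1 localhost\n127.0.1.1 minikube", "embed-certs-555028"))
	}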
	I0815 18:36:24.437938   68248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:24.437977   68248 buildroot.go:174] setting up certificates
	I0815 18:36:24.437987   68248 provision.go:84] configureAuth start
	I0815 18:36:24.437996   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.438264   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.440637   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.440961   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.440993   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.441089   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.443071   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443415   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.443448   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443562   68248 provision.go:143] copyHostCerts
	I0815 18:36:24.443622   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:24.443643   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:24.443726   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:24.443843   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:24.443855   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:24.443893   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:24.443968   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:24.443977   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:24.444007   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:24.444074   68248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.embed-certs-555028 san=[127.0.0.1 192.168.50.234 embed-certs-555028 localhost minikube]
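	[editor's note] The provisioning step above issues a server certificate whose organization and SANs come straight from the machine config (org jenkins.embed-certs-555028; SANs 127.0.0.1, 192.168.50.234, embed-certs-555028, localhost, minikube). As a rough, self-contained sketch of what such a step amounts to with Go's standard library (throwaway CA, illustrative only, not minikube's own code):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Throwaway CA, standing in for the minikubeCA key pair referenced in the log.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate carrying the same SANs as the logged provisioning step.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-555028"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"embed-certs-555028", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.234")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}

	The copyRemoteCerts lines that follow in the log show the resulting server.pem and server-key.pem being copied to /etc/docker on the guest.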
	I0815 18:36:24.507119   68248 provision.go:177] copyRemoteCerts
	I0815 18:36:24.507177   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:24.507202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.509835   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510230   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.510260   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510403   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.510606   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.510735   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.510842   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:24.590623   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:24.615635   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:36:24.643400   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:36:24.670394   68248 provision.go:87] duration metric: took 232.396705ms to configureAuth
	I0815 18:36:24.670415   68248 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:24.670609   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:24.670694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.673303   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673685   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.673721   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673863   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.674050   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674222   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674354   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.674513   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.674673   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.674688   68248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:25.149223   68429 start.go:364] duration metric: took 3m59.233021018s to acquireMachinesLock for "default-k8s-diff-port-423062"
	I0815 18:36:25.149295   68429 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:25.149306   68429 fix.go:54] fixHost starting: 
	I0815 18:36:25.149757   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:25.149799   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:25.166940   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I0815 18:36:25.167342   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:25.167882   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:25.167910   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:25.168179   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:25.168383   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:25.168553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:25.170072   68429 fix.go:112] recreateIfNeeded on default-k8s-diff-port-423062: state=Stopped err=<nil>
	I0815 18:36:25.170106   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	W0815 18:36:25.170263   68429 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:25.172091   68429 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-423062" ...
	I0815 18:36:25.173641   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Start
	I0815 18:36:25.173831   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring networks are active...
	I0815 18:36:25.174594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network default is active
	I0815 18:36:25.174981   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network mk-default-k8s-diff-port-423062 is active
	I0815 18:36:25.175410   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Getting domain xml...
	I0815 18:36:25.176275   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Creating domain...
	I0815 18:36:24.928110   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:24.928140   68248 machine.go:96] duration metric: took 819.089931ms to provisionDockerMachine
	I0815 18:36:24.928156   68248 start.go:293] postStartSetup for "embed-certs-555028" (driver="kvm2")
	I0815 18:36:24.928170   68248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:24.928190   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.928513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:24.928542   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.931301   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931756   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.931799   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931846   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.932028   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.932311   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.932477   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.011373   68248 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:25.015677   68248 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:25.015707   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:25.015798   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:25.015900   68248 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:25.016014   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:25.025465   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:25.049662   68248 start.go:296] duration metric: took 121.491742ms for postStartSetup
	I0815 18:36:25.049704   68248 fix.go:56] duration metric: took 18.508823511s for fixHost
	I0815 18:36:25.049728   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.052184   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052538   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.052583   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052718   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.052904   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053099   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.053438   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:25.053604   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:25.053614   68248 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:25.149075   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746985.122186042
	
	I0815 18:36:25.149095   68248 fix.go:216] guest clock: 1723746985.122186042
	I0815 18:36:25.149103   68248 fix.go:229] Guest: 2024-08-15 18:36:25.122186042 +0000 UTC Remote: 2024-08-15 18:36:25.049708543 +0000 UTC m=+260.258232753 (delta=72.477499ms)
	I0815 18:36:25.149131   68248 fix.go:200] guest clock delta is within tolerance: 72.477499ms
	I0815 18:36:25.149135   68248 start.go:83] releasing machines lock for "embed-certs-555028", held for 18.608287436s
	I0815 18:36:25.149158   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.149408   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:25.152125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152542   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.152568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152742   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153260   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153439   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153539   68248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:25.153587   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.153639   68248 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:25.153659   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.156311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156504   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156740   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156769   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156847   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156883   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.157040   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157122   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157303   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157318   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157473   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157479   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.233725   68248 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:25.253737   68248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:25.402047   68248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:25.409250   68248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:25.409328   68248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:25.426491   68248 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:25.426514   68248 start.go:495] detecting cgroup driver to use...
	I0815 18:36:25.426580   68248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:25.445177   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:25.459432   68248 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:25.459512   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:25.473777   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:25.488144   68248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:25.627700   68248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:25.791278   68248 docker.go:233] disabling docker service ...
	I0815 18:36:25.791349   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:25.810146   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:25.825131   68248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:25.975457   68248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:26.106757   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:26.123053   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:26.142739   68248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:26.142804   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.153163   68248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:26.153217   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.163863   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.175028   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.192480   68248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:26.208933   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.219825   68248 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.245623   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.256645   68248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:26.265947   68248 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:26.266004   68248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:26.278665   68248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:26.289519   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:26.423656   68248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:26.560919   68248 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:26.560996   68248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:26.565696   68248 start.go:563] Will wait 60s for crictl version
	I0815 18:36:26.565764   68248 ssh_runner.go:195] Run: which crictl
	I0815 18:36:26.569498   68248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:26.609872   68248 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:26.609948   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.645300   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.681229   68248 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:26.682461   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:26.685495   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686011   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:26.686037   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686323   68248 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:26.690590   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:26.703512   68248 kubeadm.go:883] updating cluster {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:26.703679   68248 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:26.703748   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:26.740601   68248 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:26.740679   68248 ssh_runner.go:195] Run: which lz4
	I0815 18:36:26.744798   68248 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:26.748894   68248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:26.748921   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:28.188174   68248 crio.go:462] duration metric: took 1.443420751s to copy over tarball
	I0815 18:36:28.188254   68248 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:26.428013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting to get IP...
	I0815 18:36:26.428929   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429397   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.429391   69513 retry.go:31] will retry after 296.45967ms: waiting for machine to come up
	I0815 18:36:26.727871   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728273   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728298   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.728237   69513 retry.go:31] will retry after 258.379179ms: waiting for machine to come up
	I0815 18:36:26.988915   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.989374   69513 retry.go:31] will retry after 418.611169ms: waiting for machine to come up
	I0815 18:36:27.409905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410358   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.410327   69513 retry.go:31] will retry after 566.642237ms: waiting for machine to come up
	I0815 18:36:27.978717   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979183   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.979125   69513 retry.go:31] will retry after 740.292473ms: waiting for machine to come up
	I0815 18:36:28.720587   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.720970   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.721008   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:28.720941   69513 retry.go:31] will retry after 610.435484ms: waiting for machine to come up
	I0815 18:36:29.333342   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333696   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333731   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:29.333632   69513 retry.go:31] will retry after 1.059086771s: waiting for machine to come up
	I0815 18:36:30.394125   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394560   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394589   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:30.394519   69513 retry.go:31] will retry after 1.279753887s: waiting for machine to come up
	I0815 18:36:30.309340   68248 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.121056035s)
	I0815 18:36:30.309382   68248 crio.go:469] duration metric: took 2.121176349s to extract the tarball
	I0815 18:36:30.309394   68248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:30.346520   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:30.394771   68248 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:30.394789   68248 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:30.394799   68248 kubeadm.go:934] updating node { 192.168.50.234 8443 v1.31.0 crio true true} ...
	I0815 18:36:30.394951   68248 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-555028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:30.395033   68248 ssh_runner.go:195] Run: crio config
	I0815 18:36:30.439636   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:30.439663   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:30.439678   68248 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:30.439707   68248 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.234 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-555028 NodeName:embed-certs-555028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:30.439899   68248 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-555028"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:30.439976   68248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:30.449774   68248 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:30.449842   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:30.458892   68248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 18:36:30.475171   68248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:30.490942   68248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 18:36:30.507498   68248 ssh_runner.go:195] Run: grep 192.168.50.234	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:30.511254   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:30.522772   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:30.646060   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:30.667948   68248 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028 for IP: 192.168.50.234
	I0815 18:36:30.667974   68248 certs.go:194] generating shared ca certs ...
	I0815 18:36:30.667994   68248 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:30.668178   68248 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:30.668231   68248 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:30.668244   68248 certs.go:256] generating profile certs ...
	I0815 18:36:30.668360   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/client.key
	I0815 18:36:30.668442   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key.539203f3
	I0815 18:36:30.668524   68248 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key
	I0815 18:36:30.668686   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:30.668725   68248 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:30.668737   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:30.668774   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:30.668807   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:30.668836   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:30.668941   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:30.669810   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:30.721245   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:30.753016   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:30.782005   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:30.815008   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 18:36:30.847615   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:30.871566   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:30.894778   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:30.919167   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:30.942597   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:30.965395   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:30.988959   68248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:31.005578   68248 ssh_runner.go:195] Run: openssl version
	I0815 18:36:31.011697   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:31.022496   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027102   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027154   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.033475   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:31.044793   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:31.055793   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060642   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060692   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.066544   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:31.077637   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:31.088468   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093295   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093347   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.098908   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:31.109856   68248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:31.114519   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:31.120709   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:31.126754   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:31.132917   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:31.138739   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:31.144785   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:31.150604   68248 kubeadm.go:392] StartCluster: {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:31.150702   68248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:31.150755   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.192152   68248 cri.go:89] found id: ""
	I0815 18:36:31.192253   68248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:31.203076   68248 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:31.203099   68248 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:31.203151   68248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:31.213659   68248 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:31.215070   68248 kubeconfig.go:125] found "embed-certs-555028" server: "https://192.168.50.234:8443"
	I0815 18:36:31.218243   68248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:31.228210   68248 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.234
	I0815 18:36:31.228245   68248 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:31.228267   68248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:31.228317   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.275944   68248 cri.go:89] found id: ""
	I0815 18:36:31.276031   68248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:31.294466   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:31.307241   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:31.307276   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:31.307327   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:36:31.316654   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:31.316722   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:31.326475   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:36:31.335726   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:31.335796   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:31.345063   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.353576   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:31.353628   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.362449   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:36:31.370717   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:31.370792   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:31.379827   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:31.389001   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:31.510611   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.158537   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.647891555s)
	I0815 18:36:33.158574   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.376600   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.459742   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.545503   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:33.545595   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.046191   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.546256   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.571236   68248 api_server.go:72] duration metric: took 1.025744612s to wait for apiserver process to appear ...
	I0815 18:36:34.571275   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:34.571297   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:31.675513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676042   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:31.675960   69513 retry.go:31] will retry after 1.669099573s: waiting for machine to come up
	I0815 18:36:33.348089   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348611   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348639   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:33.348575   69513 retry.go:31] will retry after 1.613394267s: waiting for machine to come up
	I0815 18:36:34.963674   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964187   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:34.964146   69513 retry.go:31] will retry after 2.128578928s: waiting for machine to come up
	I0815 18:36:37.262138   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.262168   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.262184   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.310539   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.310569   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.571713   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.590002   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:37.590062   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.071526   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.076179   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:38.076212   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.571714   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.576518   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:36:38.582358   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:38.582381   68248 api_server.go:131] duration metric: took 4.011097638s to wait for apiserver health ...
	I0815 18:36:38.582393   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:38.582401   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:38.584203   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:38.585513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:38.604350   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:38.645538   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:38.653445   68248 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:38.653476   68248 system_pods.go:61] "coredns-6f6b679f8f-sjx7c" [93a037b9-1e7c-471a-b62d-d7898b2b8287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:38.653486   68248 system_pods.go:61] "etcd-embed-certs-555028" [7e526b10-7acd-4d25-9847-8e11e21ba8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:38.653495   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [3f317b0f-15a1-4e7d-8ca5-3cdf70dee711] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:38.653501   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [431113cd-bce9-4ecb-8233-c5463875f1b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:38.653506   68248 system_pods.go:61] "kube-proxy-dzwt7" [a8101c7e-c010-45a3-8746-0dc20c7ef0e2] Running
	I0815 18:36:38.653513   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [84a5d051-d8c1-4097-b92c-e2f0d7a03385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:38.653520   68248 system_pods.go:61] "metrics-server-6867b74b74-wp5rn" [222160bf-6774-49a5-9f30-7582748c8a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:38.653534   68248 system_pods.go:61] "storage-provisioner" [e88c8785-2d8b-47b6-850f-e6cda74a4f5a] Running
	I0815 18:36:38.653549   68248 system_pods.go:74] duration metric: took 7.98765ms to wait for pod list to return data ...
	I0815 18:36:38.653558   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:38.656864   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:38.656893   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:38.656906   68248 node_conditions.go:105] duration metric: took 3.340245ms to run NodePressure ...
	I0815 18:36:38.656923   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:38.918518   68248 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923148   68248 kubeadm.go:739] kubelet initialised
	I0815 18:36:38.923168   68248 kubeadm.go:740] duration metric: took 4.62305ms waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923177   68248 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:38.927933   68248 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.934928   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934953   68248 pod_ready.go:82] duration metric: took 6.994953ms for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.934965   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934974   68248 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.939533   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939558   68248 pod_ready.go:82] duration metric: took 4.573835ms for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.939568   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939575   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.943567   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943590   68248 pod_ready.go:82] duration metric: took 4.004869ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.943601   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943608   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.049176   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049203   68248 pod_ready.go:82] duration metric: took 105.585473ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:39.049212   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049219   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449514   68248 pod_ready.go:93] pod "kube-proxy-dzwt7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:39.449539   68248 pod_ready.go:82] duration metric: took 400.311062ms for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449548   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:37.094139   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094640   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:37.094583   69513 retry.go:31] will retry after 2.268267509s: waiting for machine to come up
	I0815 18:36:39.365595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.365975   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.366007   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:39.365938   69513 retry.go:31] will retry after 3.286154075s: waiting for machine to come up
	I0815 18:36:44.301710   68713 start.go:364] duration metric: took 3m51.402501772s to acquireMachinesLock for "old-k8s-version-278865"
	I0815 18:36:44.301771   68713 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:44.301792   68713 fix.go:54] fixHost starting: 
	I0815 18:36:44.302227   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:44.302265   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:44.319819   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0815 18:36:44.320335   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:44.320975   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:36:44.321003   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:44.321380   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:44.321572   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:36:44.321720   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:36:44.323551   68713 fix.go:112] recreateIfNeeded on old-k8s-version-278865: state=Stopped err=<nil>
	I0815 18:36:44.323586   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	W0815 18:36:44.323748   68713 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:44.325761   68713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-278865" ...
	I0815 18:36:41.456648   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:43.456917   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:42.653801   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654221   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has current primary IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654251   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Found IP for machine: 192.168.61.7
	I0815 18:36:42.654268   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserving static IP address...
	I0815 18:36:42.654714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.654759   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | skip adding static IP to network mk-default-k8s-diff-port-423062 - found existing host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"}
	I0815 18:36:42.654778   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserved static IP address: 192.168.61.7
	I0815 18:36:42.654798   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for SSH to be available...
	I0815 18:36:42.654815   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Getting to WaitForSSH function...
	I0815 18:36:42.657618   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.657968   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.657996   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.658093   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH client type: external
	I0815 18:36:42.658115   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa (-rw-------)
	I0815 18:36:42.658200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:42.658223   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | About to run SSH command:
	I0815 18:36:42.658234   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | exit 0
	I0815 18:36:42.780714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:42.781095   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetConfigRaw
	I0815 18:36:42.781734   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:42.784384   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.784820   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.784853   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.785137   68429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/config.json ...
	I0815 18:36:42.785364   68429 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:42.785390   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:42.785599   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.788083   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.788465   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788655   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.788833   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789006   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.789301   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.789607   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.789625   68429 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:42.889002   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:42.889031   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889237   68429 buildroot.go:166] provisioning hostname "default-k8s-diff-port-423062"
	I0815 18:36:42.889260   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.892072   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892422   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.892445   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892645   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.892846   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.892995   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.893148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.893286   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.893490   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.893505   68429 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-423062 && echo "default-k8s-diff-port-423062" | sudo tee /etc/hostname
	I0815 18:36:43.008310   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-423062
	
	I0815 18:36:43.008347   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.011091   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011446   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.011472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011653   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.011864   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012027   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012159   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.012321   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.012518   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.012537   68429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-423062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-423062/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-423062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:43.121095   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:43.121123   68429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:43.121157   68429 buildroot.go:174] setting up certificates
	I0815 18:36:43.121172   68429 provision.go:84] configureAuth start
	I0815 18:36:43.121186   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:43.121510   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:43.123863   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124178   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.124200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124312   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.126385   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126633   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.126667   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126784   68429 provision.go:143] copyHostCerts
	I0815 18:36:43.126861   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:43.126884   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:43.126944   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:43.127052   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:43.127062   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:43.127090   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:43.127177   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:43.127187   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:43.127215   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:43.127286   68429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-423062 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-423062 localhost minikube]
	I0815 18:36:43.627396   68429 provision.go:177] copyRemoteCerts
	I0815 18:36:43.627460   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:43.627485   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.630025   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630311   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.630340   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630479   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.630670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.630850   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.630976   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:43.712571   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:43.738904   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 18:36:43.764328   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:36:43.787211   68429 provision.go:87] duration metric: took 666.026026ms to configureAuth
	I0815 18:36:43.787241   68429 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:43.787467   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:43.787567   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.789803   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790210   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.790232   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790432   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.790604   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790729   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.791021   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.791169   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.791187   68429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:44.067277   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:44.067307   68429 machine.go:96] duration metric: took 1.281926749s to provisionDockerMachine
	I0815 18:36:44.067322   68429 start.go:293] postStartSetup for "default-k8s-diff-port-423062" (driver="kvm2")
	I0815 18:36:44.067335   68429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:44.067360   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.067711   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:44.067749   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.070224   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070543   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.070573   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070740   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.070925   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.071079   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.071269   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.152784   68429 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:44.157264   68429 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:44.157291   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:44.157364   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:44.157461   68429 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:44.157580   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:44.168520   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:44.195223   68429 start.go:296] duration metric: took 127.886016ms for postStartSetup
	I0815 18:36:44.195268   68429 fix.go:56] duration metric: took 19.045962302s for fixHost
	I0815 18:36:44.195292   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.197711   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198065   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.198090   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198281   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.198438   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198614   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198768   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.198959   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:44.199154   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:44.199172   68429 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:44.301519   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747004.273982003
	
	I0815 18:36:44.301543   68429 fix.go:216] guest clock: 1723747004.273982003
	I0815 18:36:44.301553   68429 fix.go:229] Guest: 2024-08-15 18:36:44.273982003 +0000 UTC Remote: 2024-08-15 18:36:44.195273929 +0000 UTC m=+258.412094909 (delta=78.708074ms)
	I0815 18:36:44.301598   68429 fix.go:200] guest clock delta is within tolerance: 78.708074ms
	I0815 18:36:44.301606   68429 start.go:83] releasing machines lock for "default-k8s-diff-port-423062", held for 19.152336719s
	I0815 18:36:44.301638   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.301903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:44.305012   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305498   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.305524   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305742   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306240   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306425   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306533   68429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:44.306595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.306689   68429 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:44.306714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.309649   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.309838   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310098   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310133   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310250   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310267   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310296   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310457   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310634   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310654   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310794   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310798   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.310947   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.412125   68429 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:44.420070   68429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:44.566014   68429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:44.572209   68429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:44.572283   68429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:44.593041   68429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:44.593067   68429 start.go:495] detecting cgroup driver to use...
	I0815 18:36:44.593145   68429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:44.613683   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:44.627766   68429 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:44.627851   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:44.641172   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:44.654952   68429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:44.778684   68429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:44.965548   68429 docker.go:233] disabling docker service ...
	I0815 18:36:44.965631   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:44.983153   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:44.999109   68429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:45.131097   68429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:45.270930   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:45.287846   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:45.309345   68429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:45.309407   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.320032   68429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:45.320092   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.331647   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.342499   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.353192   68429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:45.364163   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.381124   68429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.403692   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.415032   68429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:45.424798   68429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:45.424859   68429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:45.439077   68429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:45.448554   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:45.570697   68429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:45.719575   68429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:45.719655   68429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:45.724415   68429 start.go:563] Will wait 60s for crictl version
	I0815 18:36:45.724476   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:36:45.728443   68429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:45.770935   68429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:45.771023   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.799588   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.830915   68429 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:44.327259   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .Start
	I0815 18:36:44.327431   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring networks are active...
	I0815 18:36:44.328116   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network default is active
	I0815 18:36:44.328601   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network mk-old-k8s-version-278865 is active
	I0815 18:36:44.329081   68713 main.go:141] libmachine: (old-k8s-version-278865) Getting domain xml...
	I0815 18:36:44.331888   68713 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:36:45.633882   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting to get IP...
	I0815 18:36:45.634842   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.635216   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.635286   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.635206   69670 retry.go:31] will retry after 300.377534ms: waiting for machine to come up
	I0815 18:36:45.937793   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.938290   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.938312   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.938236   69670 retry.go:31] will retry after 282.311084ms: waiting for machine to come up
	I0815 18:36:46.222856   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.223327   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.223350   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.223283   69670 retry.go:31] will retry after 354.299649ms: waiting for machine to come up
	I0815 18:36:46.578770   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.579337   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.579360   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.579241   69670 retry.go:31] will retry after 382.947645ms: waiting for machine to come up
	I0815 18:36:46.964003   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.964911   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.964943   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.964824   69670 retry.go:31] will retry after 710.757442ms: waiting for machine to come up
	I0815 18:36:47.676738   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:47.677422   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:47.677450   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:47.677360   69670 retry.go:31] will retry after 588.944709ms: waiting for machine to come up
	I0815 18:36:45.957776   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:48.456345   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:45.832411   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:45.835145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835523   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:45.835553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835762   68429 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:45.840347   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:45.854348   68429 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:45.854471   68429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:45.854527   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:45.899238   68429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:45.899320   68429 ssh_runner.go:195] Run: which lz4
	I0815 18:36:45.903367   68429 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:45.907499   68429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:45.907526   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:47.317850   68429 crio.go:462] duration metric: took 1.414524229s to copy over tarball
	I0815 18:36:47.317929   68429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:49.443172   68429 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125212316s)
	I0815 18:36:49.443206   68429 crio.go:469] duration metric: took 2.125324606s to extract the tarball
	I0815 18:36:49.443215   68429 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:49.483693   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:49.535588   68429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:49.535617   68429 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:49.535627   68429 kubeadm.go:934] updating node { 192.168.61.7 8444 v1.31.0 crio true true} ...
	I0815 18:36:49.535753   68429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-423062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
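The kubelet drop-in printed above blanks ExecStart before setting the new command line, which is the standard systemd pattern for overriding a unit's ExecStart. A small illustrative Go program that renders a drop-in of the same shape with text/template (the template and field names are this sketch's own, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // The empty ExecStart= clears the base unit's command before the override sets the real one.
    const dropin = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.Kubelet}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("10-kubeadm.conf").Parse(dropin))
        err := t.Execute(os.Stdout, map[string]string{
            "Kubelet":  "/var/lib/minikube/binaries/v1.31.0/kubelet",
            "NodeName": "default-k8s-diff-port-423062",
            "NodeIP":   "192.168.61.7",
        })
        if err != nil {
            os.Exit(1)
        }
    }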
	I0815 18:36:49.535843   68429 ssh_runner.go:195] Run: crio config
	I0815 18:36:49.587186   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:49.587215   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:49.587232   68429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:49.587257   68429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-423062 NodeName:default-k8s-diff-port-423062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:49.587447   68429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-423062"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:49.587520   68429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:49.598312   68429 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:49.598376   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:49.608382   68429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0815 18:36:49.624449   68429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:49.647224   68429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0815 18:36:49.664848   68429 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:49.668582   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:49.680786   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:49.804940   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:49.826104   68429 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062 for IP: 192.168.61.7
	I0815 18:36:49.826130   68429 certs.go:194] generating shared ca certs ...
	I0815 18:36:49.826147   68429 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:49.826281   68429 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:49.826322   68429 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:49.826331   68429 certs.go:256] generating profile certs ...
	I0815 18:36:49.826403   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.key
	I0815 18:36:49.826461   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key.534debab
	I0815 18:36:49.826528   68429 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key
	I0815 18:36:49.826667   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:49.826713   68429 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:49.826725   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:49.826748   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:49.826777   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:49.826810   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:49.826868   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:49.827597   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:49.855678   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:49.891292   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:49.928612   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:49.961506   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 18:36:49.993955   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:50.019275   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:50.046773   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:50.074201   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:50.101491   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:50.125378   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:50.149974   68429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:50.166393   68429 ssh_runner.go:195] Run: openssl version
	I0815 18:36:50.172182   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:50.182755   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187110   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187155   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.192956   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:50.203680   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:50.214269   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218876   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218925   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.224463   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:50.234811   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:50.245585   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250397   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250446   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.256189   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:50.267342   68429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:50.272011   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:50.278217   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:50.284300   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:50.290402   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:50.296174   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:50.301957   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
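Each of the openssl x509 -checkend 86400 runs above asks whether a certificate expires within the next 24 hours (86400 seconds). The same check written directly against crypto/x509, as an illustration only (the file name in main is a placeholder):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate at path expires within duration d,
    // mirroring what `openssl x509 -noout -checkend <seconds>` verifies.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", soon)
    }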
	I0815 18:36:50.307807   68429 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:50.307910   68429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:50.307973   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.359833   68429 cri.go:89] found id: ""
	I0815 18:36:50.359923   68429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:50.370306   68429 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:50.370324   68429 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:50.370379   68429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:50.379585   68429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:50.380510   68429 kubeconfig.go:125] found "default-k8s-diff-port-423062" server: "https://192.168.61.7:8444"
	I0815 18:36:50.384136   68429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:50.393393   68429 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.7
	I0815 18:36:50.393428   68429 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:50.393441   68429 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:50.393494   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.428085   68429 cri.go:89] found id: ""
	I0815 18:36:50.428162   68429 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:50.444032   68429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:50.454927   68429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:50.454948   68429 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:50.455000   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 18:36:50.464733   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:50.464797   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:50.473973   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 18:36:50.482861   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:50.482910   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:50.492213   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.501173   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:50.501230   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.510299   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 18:36:50.519262   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:50.519308   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:50.528632   68429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:50.537914   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:50.655230   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:48.268221   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:48.268790   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:48.268814   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:48.268736   69670 retry.go:31] will retry after 781.489196ms: waiting for machine to come up
	I0815 18:36:49.051824   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:49.052246   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:49.052277   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:49.052182   69670 retry.go:31] will retry after 1.393037007s: waiting for machine to come up
	I0815 18:36:50.446428   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:50.446860   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:50.446892   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:50.446800   69670 retry.go:31] will retry after 1.826779004s: waiting for machine to come up
	I0815 18:36:52.275716   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:52.276208   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:52.276231   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:52.276167   69670 retry.go:31] will retry after 1.746726312s: waiting for machine to come up
	I0815 18:36:50.458388   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:52.147996   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:52.148026   68248 pod_ready.go:82] duration metric: took 12.698470185s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:52.148039   68248 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:54.153927   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:51.670903   68429 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015612511s)
	I0815 18:36:51.670943   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:51.985806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.069082   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.189200   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:52.189298   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:52.689767   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.189633   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.205099   68429 api_server.go:72] duration metric: took 1.015908263s to wait for apiserver process to appear ...
	I0815 18:36:53.205136   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:53.205162   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:53.205695   68429 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0815 18:36:53.705285   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.721139   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.721177   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:55.721193   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.750790   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.750825   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:56.205675   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.212464   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.212509   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:56.705700   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.716232   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.716277   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:57.205663   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:57.211081   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:36:57.217736   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:57.217763   68429 api_server.go:131] duration metric: took 4.012620084s to wait for apiserver health ...
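The healthz sequence above is a simple poll: the endpoint first refuses the connection, then answers 403 for the anonymous user, then 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still failing, and finally 200 with body "ok". A self-contained Go sketch of such a polling loop (URL and timeout are example values; this is not minikube's api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver serves a self-signed cert during bring-up, so skip verification here.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403 (anonymous user) and 500 (failing post-start hooks) both mean "not ready yet".
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.61.7:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }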
	I0815 18:36:57.217772   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:57.217778   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:57.219455   68429 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:54.025067   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:54.025508   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:54.025535   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:54.025462   69670 retry.go:31] will retry after 2.693215306s: waiting for machine to come up
	I0815 18:36:56.721740   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:56.722139   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:56.722178   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:56.722070   69670 retry.go:31] will retry after 3.370623363s: waiting for machine to come up
	I0815 18:36:57.220672   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:57.241710   68429 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:57.262714   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:57.272766   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:57.272822   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:57.272836   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:57.272849   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:57.272862   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:57.272872   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:36:57.272887   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:57.272896   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:57.272902   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:36:57.272913   68429 system_pods.go:74] duration metric: took 10.175415ms to wait for pod list to return data ...
	I0815 18:36:57.272924   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:57.276880   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:57.276915   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:57.276929   68429 node_conditions.go:105] duration metric: took 3.998879ms to run NodePressure ...
	I0815 18:36:57.276951   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:57.554251   68429 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558062   68429 kubeadm.go:739] kubelet initialised
	I0815 18:36:57.558084   68429 kubeadm.go:740] duration metric: took 3.811943ms waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558091   68429 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:57.562470   68429 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.567212   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567232   68429 pod_ready.go:82] duration metric: took 4.742538ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.567240   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567245   68429 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.571217   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571237   68429 pod_ready.go:82] duration metric: took 3.984908ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.571247   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571255   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.575456   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575494   68429 pod_ready.go:82] duration metric: took 4.232215ms for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.575507   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575515   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.665876   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665902   68429 pod_ready.go:82] duration metric: took 90.37918ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.665914   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665921   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.066377   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066402   68429 pod_ready.go:82] duration metric: took 400.475025ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.066411   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066426   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.465739   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465767   68429 pod_ready.go:82] duration metric: took 399.331024ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.465779   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465787   68429 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.866772   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866798   68429 pod_ready.go:82] duration metric: took 401.001046ms for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.866809   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866817   68429 pod_ready.go:39] duration metric: took 1.308717049s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:58.866835   68429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:36:58.878274   68429 ops.go:34] apiserver oom_adj: -16
	I0815 18:36:58.878298   68429 kubeadm.go:597] duration metric: took 8.507965813s to restartPrimaryControlPlane
	I0815 18:36:58.878308   68429 kubeadm.go:394] duration metric: took 8.570508558s to StartCluster
	I0815 18:36:58.878327   68429 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.878499   68429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:36:58.879927   68429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.880213   68429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:36:58.880262   68429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:36:58.880339   68429 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880375   68429 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-423062"
	I0815 18:36:58.880374   68429 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-423062"
	W0815 18:36:58.880383   68429 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:36:58.880367   68429 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880403   68429 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.880410   68429 addons.go:243] addon metrics-server should already be in state true
	I0815 18:36:58.880414   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880422   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:58.880428   68429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-423062"
	I0815 18:36:58.880434   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880772   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880778   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880801   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880820   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880826   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880855   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.882047   68429 out.go:177] * Verifying Kubernetes components...
	I0815 18:36:58.883440   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:58.895575   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0815 18:36:58.895577   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0815 18:36:58.895739   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0815 18:36:58.896031   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896063   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896121   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896511   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896529   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896612   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896631   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896749   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896768   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896917   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.896963   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897099   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897132   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.897483   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897527   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.897535   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897558   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.900773   68429 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.900796   68429 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:36:58.900825   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.901206   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.901238   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.912877   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0815 18:36:58.912903   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37245
	I0815 18:36:58.913271   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913344   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913835   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913845   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913852   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.913862   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.914177   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914218   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914361   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.914408   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.916165   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.916601   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.918553   68429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:36:58.918560   68429 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:36:56.154697   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.654414   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.919539   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0815 18:36:58.919773   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:36:58.919790   68429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:36:58.919809   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919884   68429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:58.919900   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:36:58.919916   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919945   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.920330   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.920343   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.920777   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.921363   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.921401   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.923262   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923629   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.923656   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923684   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924108   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924256   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924319   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.924337   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924501   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924564   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.924688   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.924773   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924944   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.925266   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.938064   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0815 18:36:58.938411   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.938762   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.938782   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.939057   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.939214   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.941134   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.941395   68429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:58.941414   68429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:36:58.941436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.943936   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944331   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.944355   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.944765   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.944900   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.944977   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:59.069466   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:59.090259   68429 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:36:59.203591   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:59.232676   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:36:59.232705   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:36:59.273079   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:59.287625   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:36:59.287653   68429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:36:59.359798   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:36:59.359821   68429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:36:59.406350   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:00.373429   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16980511s)
	I0815 18:37:00.373477   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373495   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373501   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.10037967s)
	I0815 18:37:00.373546   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373563   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373787   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373805   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373848   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373852   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373863   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373866   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373890   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373879   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373937   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.374313   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374322   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.374326   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.374344   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374355   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.379434   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.379450   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.379666   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.379679   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.389853   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.389872   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390152   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390173   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390181   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.390189   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390396   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390447   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390461   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390475   68429 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-423062"
	I0815 18:37:00.392530   68429 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:37:00.393703   68429 addons.go:510] duration metric: took 1.51344438s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
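
For reference, the metrics-server enable step recorded above boils down to copying the four manifests into /etc/kubernetes/addons inside the VM and applying them with the bundled kubectl. A minimal sketch of reproducing that apply by hand (profile name, kubeconfig path, and in-VM paths taken from the log; the manual invocation itself is an assumption):

  minikube -p default-k8s-diff-port-423062 ssh -- \
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
    /var/lib/minikube/binaries/v1.31.0/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml
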
	I0815 18:37:00.093896   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:00.094391   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:37:00.094453   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:37:00.094333   69670 retry.go:31] will retry after 2.855023319s: waiting for machine to come up
	I0815 18:37:04.297557   67936 start.go:364] duration metric: took 52.755115386s to acquireMachinesLock for "no-preload-599042"
	I0815 18:37:04.297614   67936 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:37:04.297639   67936 fix.go:54] fixHost starting: 
	I0815 18:37:04.298066   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:04.298096   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:04.317897   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
	I0815 18:37:04.318309   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:04.318797   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:04.318822   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:04.319191   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:04.319388   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:04.319543   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:04.320970   67936 fix.go:112] recreateIfNeeded on no-preload-599042: state=Stopped err=<nil>
	I0815 18:37:04.320994   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	W0815 18:37:04.321164   67936 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:37:04.322689   67936 out.go:177] * Restarting existing kvm2 VM for "no-preload-599042" ...
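
The fixHost sequence above finds no-preload-599042 stopped and restarts the existing kvm2 VM rather than recreating it. A quick way to observe the same transition from the host side, assuming only the profile name shown in the log:

  minikube status -p no-preload-599042
  minikube start -p no-preload-599042 --driver=kvm2
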
	I0815 18:37:00.654833   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:03.154235   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:02.950449   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950903   68713 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:37:02.950931   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950941   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:37:02.951319   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.951356   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | skip adding static IP to network mk-old-k8s-version-278865 - found existing host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"}
	I0815 18:37:02.951376   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:37:02.951393   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:37:02.951424   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:37:02.953498   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.953804   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953927   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:37:02.953957   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:37:02.953989   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:02.954001   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:37:02.954009   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:37:03.076431   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:03.076748   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:37:03.077325   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.079733   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080100   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.080132   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080332   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:37:03.080537   68713 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:03.080554   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:03.080717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.082778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083140   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.083168   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083331   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.083482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083612   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083730   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.083881   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.084067   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.084078   68713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:03.188779   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:03.188813   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189045   68713 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:37:03.189069   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189284   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.191858   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192171   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.192192   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192328   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.192533   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192676   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192822   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.193015   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.193180   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.193192   68713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:37:03.313099   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:37:03.313129   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.315840   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316196   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.316226   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316378   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.316608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316760   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316885   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.317001   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.317184   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.317207   68713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:03.429897   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:03.429934   68713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:03.429962   68713 buildroot.go:174] setting up certificates
	I0815 18:37:03.429972   68713 provision.go:84] configureAuth start
	I0815 18:37:03.429983   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.430274   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.432724   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433053   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.433083   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433212   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.435181   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435514   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.435543   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435657   68713 provision.go:143] copyHostCerts
	I0815 18:37:03.435715   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:03.435736   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:03.435804   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:03.435919   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:03.435929   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:03.435959   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:03.436045   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:03.436055   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:03.436082   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:03.436170   68713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
	I0815 18:37:03.604924   68713 provision.go:177] copyRemoteCerts
	I0815 18:37:03.604979   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:03.605003   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.607328   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607616   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.607634   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607821   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.608016   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.608171   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.608429   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:03.690560   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:03.714632   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:37:03.737805   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:03.762338   68713 provision.go:87] duration metric: took 332.353741ms to configureAuth
	I0815 18:37:03.762371   68713 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:03.762543   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:37:03.762608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.765626   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.765988   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.766018   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.766211   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.766380   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766574   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766712   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.766897   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.767053   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.767069   68713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:04.050635   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:04.050663   68713 machine.go:96] duration metric: took 970.113556ms to provisionDockerMachine
	I0815 18:37:04.050674   68713 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:37:04.050685   68713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:04.050717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.051048   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:04.051081   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.053709   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054095   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.054124   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054432   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.054622   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.054774   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.054914   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.139381   68713 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:04.145097   68713 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:04.145124   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:04.145201   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:04.145298   68713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:04.145421   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:04.156166   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:04.181562   68713 start.go:296] duration metric: took 130.872499ms for postStartSetup
	I0815 18:37:04.181605   68713 fix.go:56] duration metric: took 19.879821037s for fixHost
	I0815 18:37:04.181629   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.184268   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184652   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.184682   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184917   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.185151   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185345   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185502   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.185677   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:04.185925   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:04.185938   68713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:04.297391   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747024.271483326
	
	I0815 18:37:04.297413   68713 fix.go:216] guest clock: 1723747024.271483326
	I0815 18:37:04.297423   68713 fix.go:229] Guest: 2024-08-15 18:37:04.271483326 +0000 UTC Remote: 2024-08-15 18:37:04.181610291 +0000 UTC m=+251.426055371 (delta=89.873035ms)
	I0815 18:37:04.297448   68713 fix.go:200] guest clock delta is within tolerance: 89.873035ms
	I0815 18:37:04.297455   68713 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 19.99571173s
	I0815 18:37:04.297504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.297818   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:04.300970   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301425   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.301455   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301609   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302194   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302404   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302495   68713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:04.302545   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.302679   68713 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:04.302705   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.305673   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.305903   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306066   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306092   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306273   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306301   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306337   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306537   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306657   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306664   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306827   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306834   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.307009   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.409319   68713 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:04.415576   68713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:04.565772   68713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:04.571909   68713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:04.571996   68713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:04.588400   68713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:04.588427   68713 start.go:495] detecting cgroup driver to use...
	I0815 18:37:04.588528   68713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:04.604253   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:04.619003   68713 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:04.619051   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:04.632530   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:04.646080   68713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:04.763855   68713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:04.922470   68713 docker.go:233] disabling docker service ...
	I0815 18:37:04.922566   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:04.937301   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:04.950721   68713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:05.079767   68713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:05.210207   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:05.225569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:05.247998   68713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:37:05.248070   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.262851   68713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:05.262924   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.274489   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.285901   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.298749   68713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:05.310052   68713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:05.320992   68713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:05.321073   68713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:05.340323   68713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:05.354069   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:05.483573   68713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:05.647020   68713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:05.647094   68713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:05.653850   68713 start.go:563] Will wait 60s for crictl version
	I0815 18:37:05.653924   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:05.658476   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:05.697818   68713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:05.697907   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.724931   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.755831   68713 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
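
The cri-o preparation logged above amounts to pointing crictl at the crio socket and rewriting three keys in the 02-crio.conf drop-in before restarting the service. The end state the sed edits converge on looks roughly like this (paths and values from the log; the surrounding file layout is assumed from the stock minikube ISO):

  # /etc/crictl.yaml
  runtime-endpoint: unix:///var/run/crio/crio.sock

  # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
  pause_image = "registry.k8s.io/pause:3.2"
  cgroup_manager = "cgroupfs"
  conmon_cgroup = "pod"

After the systemctl restart, crictl version reports RuntimeVersion 1.29.1, as shown a few lines later.
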
	I0815 18:37:01.094934   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:03.594364   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:05.756950   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:05.759791   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760188   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:05.760220   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760468   68713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:05.764753   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:05.777462   68713 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:05.777614   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:37:05.777679   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:05.848895   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:05.848967   68713 ssh_runner.go:195] Run: which lz4
	I0815 18:37:05.853103   68713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:37:05.858012   68713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:37:05.858046   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:37:07.520567   68713 crio.go:462] duration metric: took 1.667489785s to copy over tarball
	I0815 18:37:07.520642   68713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
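The lines above check the guest for /preloaded.tar.lz4, copy the cached image tarball over when the stat probe fails, and unpack it into /var with tar's lz4 filter while preserving capability xattrs. A minimal Go sketch of the same check-copy-extract sequence, run against the local filesystem rather than through minikube's ssh_runner (the source path is an assumption; only the target file name and tar flags are taken from the log):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        // Mirror the "stat, then copy" guard: only transfer when the file is absent.
        if _, err := os.Stat(tarball); err != nil {
            src := "preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" // assumed local cache copy
            if err := exec.Command("cp", src, tarball).Run(); err != nil {
                panic(err)
            }
        }
        // Same extraction command as in the log: lz4-decompress and untar into /var,
        // preserving security.capability extended attributes.
        cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
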
	I0815 18:37:04.324093   67936 main.go:141] libmachine: (no-preload-599042) Calling .Start
	I0815 18:37:04.324263   67936 main.go:141] libmachine: (no-preload-599042) Ensuring networks are active...
	I0815 18:37:04.325099   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network default is active
	I0815 18:37:04.325778   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network mk-no-preload-599042 is active
	I0815 18:37:04.326007   67936 main.go:141] libmachine: (no-preload-599042) Getting domain xml...
	I0815 18:37:04.328184   67936 main.go:141] libmachine: (no-preload-599042) Creating domain...
	I0815 18:37:05.626206   67936 main.go:141] libmachine: (no-preload-599042) Waiting to get IP...
	I0815 18:37:05.627374   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.627877   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.627935   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.627844   69876 retry.go:31] will retry after 199.774188ms: waiting for machine to come up
	I0815 18:37:05.829673   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.830213   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.830240   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.830170   69876 retry.go:31] will retry after 255.850483ms: waiting for machine to come up
	I0815 18:37:06.087766   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.088378   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.088405   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.088330   69876 retry.go:31] will retry after 351.231421ms: waiting for machine to come up
	I0815 18:37:06.440937   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.441597   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.441626   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.441572   69876 retry.go:31] will retry after 602.620924ms: waiting for machine to come up
	I0815 18:37:07.046269   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.046745   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.046769   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.046712   69876 retry.go:31] will retry after 578.450642ms: waiting for machine to come up
	I0815 18:37:07.627330   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.627832   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.627859   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.627791   69876 retry.go:31] will retry after 731.331176ms: waiting for machine to come up
	I0815 18:37:08.361310   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:08.361746   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:08.361776   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:08.361706   69876 retry.go:31] will retry after 1.089237688s: waiting for machine to come up
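The interleaved no-preload-599042 lines show libmachine polling the libvirt DHCP leases for the domain's IP and sleeping for a progressively longer, jittered interval between attempts. A rough Go sketch of that retry shape; lookupIP and the two-minute deadline are placeholders, not minikube's actual retry.go implementation:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a placeholder for querying the hypervisor's DHCP leases.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func main() {
        delay := 200 * time.Millisecond
        deadline := time.Now().Add(2 * time.Minute) // assumed overall limit
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                fmt.Println("Found IP for machine:", ip)
                return
            }
            // Grow the delay and add jitter, roughly matching the increasing
            // "will retry after ..." intervals in the log.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay *= 2
        }
        fmt.Println("timed out waiting for machine IP")
    }
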
	I0815 18:37:05.157378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:07.162990   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:09.654672   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:06.093822   68429 node_ready.go:49] node "default-k8s-diff-port-423062" has status "Ready":"True"
	I0815 18:37:06.093853   68429 node_ready.go:38] duration metric: took 7.003558244s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:37:06.093867   68429 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:06.103462   68429 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111214   68429 pod_ready.go:93] pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.111235   68429 pod_ready.go:82] duration metric: took 7.746382ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111244   68429 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117713   68429 pod_ready.go:93] pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.117739   68429 pod_ready.go:82] duration metric: took 6.487608ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117750   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:08.126216   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.128095   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.534169   68713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013498464s)
	I0815 18:37:10.534194   68713 crio.go:469] duration metric: took 3.013602868s to extract the tarball
	I0815 18:37:10.534201   68713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:37:10.578998   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:10.619043   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:10.619146   68713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:10.619246   68713 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.619247   68713 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.619278   68713 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:37:10.619275   68713 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.619291   68713 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.619304   68713 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.619322   68713 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.619405   68713 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621367   68713 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.621384   68713 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:37:10.621468   68713 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.621500   68713 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.621596   68713 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.621646   68713 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621706   68713 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.621897   68713 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.798617   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.828530   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:37:10.859528   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.918714   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.977028   68713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:37:10.977073   68713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.977119   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:10.980573   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.985503   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.990642   68713 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:37:10.990684   68713 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:37:10.990733   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.000388   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.007526   68713 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:37:11.007589   68713 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.007642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.008543   68713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:37:11.008581   68713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.008621   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.008642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077224   68713 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:37:11.077269   68713 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077228   68713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:37:11.077347   68713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.077371   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111299   68713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:37:11.111376   68713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.111387   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.111421   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111471   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.156942   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.156944   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.156997   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.263355   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.263448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.263455   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.263544   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.291407   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.312626   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.334606   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.427937   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.433739   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:11.435371   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.439448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.439541   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:37:11.450901   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.477906   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.520009   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:37:11.572349   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:37:11.686243   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:37:11.686295   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:37:11.686325   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:37:11.686378   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:37:11.686420   68713 cache_images.go:92] duration metric: took 1.067250234s to LoadCachedImages
	W0815 18:37:11.686494   68713 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
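The cache_images step above inspects each required image in the container runtime (sudo podman image inspect), marks any image that is missing or not at the pinned ID as needing transfer, removes the stale tag with crictl rmi, and then attempts to load a replacement from the local image cache, which is absent on this host, hence the warning. A hedged Go sketch of that per-image decision (image name and expected ID are copied from the log; the load-from-cache step itself is omitted):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        image := "registry.k8s.io/kube-proxy:v1.20.0"
        pinned := "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" // expected ID from the log
        out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
        id := strings.TrimSpace(string(out))
        if err == nil && id == pinned {
            fmt.Println(image, "already present at the expected ID")
            return
        }
        // Image missing or at the wrong ID: drop the stale tag; a full
        // implementation would then load the cached archive from disk.
        _ = exec.Command("sudo", "crictl", "rmi", image).Run()
        fmt.Println(image, "needs transfer from the local image cache")
    }
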
	I0815 18:37:11.686508   68713 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:37:11.686620   68713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:11.686693   68713 ssh_runner.go:195] Run: crio config
	I0815 18:37:11.736781   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:37:11.736808   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:11.736824   68713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:11.736851   68713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:37:11.737039   68713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:11.737120   68713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:37:11.747511   68713 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:11.747585   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:11.757850   68713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:37:11.775982   68713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:11.792938   68713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:37:11.811576   68713 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:11.815708   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
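The /etc/hosts update above filters out any existing control-plane.minikube.internal entry, appends the fresh mapping, writes the result to a temp file, and only then copies it into place with sudo; the temp-file-plus-cp dance is needed because a plain shell redirection would be opened with the unprivileged user's permissions rather than root's. A small Go sketch that issues the same kind of guarded update through bash (host and IP are the values from the log; this is a local equivalent, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        ip, host := "192.168.39.89", "control-plane.minikube.internal"
        // Drop any stale entry for the host, append the new mapping, then let
        // sudo install the rewritten file; "sudo ... > /etc/hosts" would open
        // the file as the calling user and fail.
        script := fmt.Sprintf(
            `{ grep -v $'\t%s$' /etc/hosts; echo -e "%s\t%s"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts`,
            host, ip, host)
        cmd := exec.Command("/bin/bash", "-c", script)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
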
	I0815 18:37:11.829992   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:11.983884   68713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:12.002603   68713 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:37:12.002632   68713 certs.go:194] generating shared ca certs ...
	I0815 18:37:12.002682   68713 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.002867   68713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:12.002926   68713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:12.002942   68713 certs.go:256] generating profile certs ...
	I0815 18:37:12.025160   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:37:12.025296   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:37:12.025351   68713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:37:12.025516   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:12.025578   68713 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:12.025591   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:12.025627   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:12.025661   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:12.025691   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:12.025746   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:12.026614   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:12.066771   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:12.109649   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:12.176744   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:12.207990   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:37:12.244999   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:37:12.282338   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:12.308761   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:37:12.332316   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:12.355977   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:12.379169   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:12.405472   68713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:12.424110   68713 ssh_runner.go:195] Run: openssl version
	I0815 18:37:12.430231   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:12.441531   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.445971   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.446061   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.452134   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:12.466809   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:12.478211   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482659   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482708   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.490225   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:12.504908   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:12.516825   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521854   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521911   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.527884   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
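Each of the three certificate blocks above installs a PEM under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (the output of openssl x509 -hash -noout, e.g. b5213941.0), which is the naming scheme OpenSSL uses to find CAs in a hashed certificate directory. A minimal Go sketch of that hash-and-link step for the minikubeCA certificate (paths from the log; creating the symlink requires root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // Ask openssl for the subject hash that names the symlink.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Replace any existing link, like the "ln -fs" in the log (needs root).
        _ = os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }
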
	I0815 18:37:12.539398   68713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:12.544010   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:12.549918   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:12.555714   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:12.561895   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:12.567736   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:12.573664   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
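The -checkend 86400 probes above make openssl exit non-zero if a certificate will expire within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing control-plane certificates can be reused. An equivalent check written directly against crypto/x509, using one of the certificate paths probed in the log:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same condition as "-checkend 86400": fail if the cert expires in < 24h.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h; would need regeneration")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h")
    }
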
	I0815 18:37:12.579510   68713 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:12.579627   68713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:12.579688   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.621503   68713 cri.go:89] found id: ""
	I0815 18:37:12.621576   68713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:12.632722   68713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:12.632746   68713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:12.632796   68713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:12.643192   68713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:12.644607   68713 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:12.645629   68713 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-278865" cluster setting kubeconfig missing "old-k8s-version-278865" context setting]
	I0815 18:37:12.647073   68713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.653052   68713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:12.665777   68713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.89
	I0815 18:37:12.665808   68713 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:12.665821   68713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:12.665872   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.713574   68713 cri.go:89] found id: ""
	I0815 18:37:12.713641   68713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:12.731459   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:12.741769   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:12.741789   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:12.741833   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:12.750990   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:12.751049   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:12.761621   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:12.771204   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:12.771261   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:12.782012   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:09.452971   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:09.453451   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:09.453494   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:09.453393   69876 retry.go:31] will retry after 1.35461204s: waiting for machine to come up
	I0815 18:37:10.809664   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:10.810127   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:10.810158   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:10.810065   69876 retry.go:31] will retry after 1.709820883s: waiting for machine to come up
	I0815 18:37:12.521458   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:12.521988   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:12.522016   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:12.521930   69876 retry.go:31] will retry after 1.401971708s: waiting for machine to come up
	I0815 18:37:13.925401   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:13.925868   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:13.925898   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:13.925824   69876 retry.go:31] will retry after 2.768002946s: waiting for machine to come up
	I0815 18:37:11.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:14.154561   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.400960   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:13.128357   68429 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.128379   68429 pod_ready.go:82] duration metric: took 7.010621879s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.128389   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136617   68429 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.136638   68429 pod_ready.go:82] duration metric: took 8.242471ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136648   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143530   68429 pod_ready.go:93] pod "kube-proxy-bnxv7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.143551   68429 pod_ready.go:82] duration metric: took 6.895931ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143563   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151691   68429 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.151721   68429 pod_ready.go:82] duration metric: took 8.149821ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151735   68429 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:15.158172   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
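The pod_ready.go lines throughout this section poll each system-critical pod's Ready condition through the Kubernetes API, waiting up to 6m0s per pod and logging "Ready":"False" on every unsuccessful probe. A stand-in Go sketch that performs the same kind of readiness poll by shelling out to kubectl instead of using client-go (namespace and pod name are taken from the log; using the profile name as the kubectl context and the two-second probe interval are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady asks kubectl for the pod's Ready condition, standing in for the
    // client-go status checks behind the pod_ready.go lines above.
    func podReady(ns, name string) bool {
        out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-423062",
            "-n", ns, "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        return err == nil && strings.TrimSpace(string(out)) == "True"
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s" per pod
        for time.Now().Before(deadline) {
            if podReady("kube-system", "metrics-server-6867b74b74-8mppk") {
                fmt.Println(`status "Ready":"True"`)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println(`still "Ready":"False" after timeout`)
    }
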
	I0815 18:37:12.791928   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:12.791994   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:12.801858   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:12.811023   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:12.811083   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
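The config check above found none of the four /etc/kubernetes/*.conf files, so each grep for the control-plane endpoint failed and the file was removed so that the kubeadm phases that follow can regenerate it. A local, unprivileged Go approximation of that grep-and-remove cleanup (endpoint and file list come from the log; the real commands run through sudo on the guest):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Missing or pointing at the wrong endpoint: remove it so the
                // kubeconfig phase regenerates it.
                _ = os.Remove(conf)
                fmt.Println("removed stale config:", conf)
            }
        }
    }
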
	I0815 18:37:12.822189   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:12.834293   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:12.974325   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.452192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.690442   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.798270   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.900783   68713 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:13.900877   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.401954   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.901809   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.401755   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.901010   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.401794   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.901149   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:17.401599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
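The repeating pgrep probes above are the wait-for-apiserver loop: roughly every half second the process table is checked for a kube-apiserver started by minikube, until it appears or an overall timeout elapses. A small Go sketch of that polling loop (the two-minute deadline is illustrative; minikube's actual limits live in api_server.go):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout, for illustration
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a kube-apiserver process matches the pattern.
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver process appeared, pid(s): %s", out)
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing of the logged probes
        }
        fmt.Println("timed out waiting for apiserver process to appear")
    }
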
	I0815 18:37:16.694999   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:16.695488   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:16.695506   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:16.695430   69876 retry.go:31] will retry after 2.308386075s: waiting for machine to come up
	I0815 18:37:16.154692   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:18.653763   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.159197   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:19.159442   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.901511   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.401720   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.900976   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.401223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.901522   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.901573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:22.401279   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.005581   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:19.005979   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:19.006008   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:19.005930   69876 retry.go:31] will retry after 2.758801207s: waiting for machine to come up
	I0815 18:37:21.766860   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767286   67936 main.go:141] libmachine: (no-preload-599042) Found IP for machine: 192.168.72.14
	I0815 18:37:21.767303   67936 main.go:141] libmachine: (no-preload-599042) Reserving static IP address...
	I0815 18:37:21.767314   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has current primary IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767722   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.767745   67936 main.go:141] libmachine: (no-preload-599042) Reserved static IP address: 192.168.72.14
	I0815 18:37:21.767757   67936 main.go:141] libmachine: (no-preload-599042) DBG | skip adding static IP to network mk-no-preload-599042 - found existing host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"}
	I0815 18:37:21.767768   67936 main.go:141] libmachine: (no-preload-599042) DBG | Getting to WaitForSSH function...
	I0815 18:37:21.767780   67936 main.go:141] libmachine: (no-preload-599042) Waiting for SSH to be available...
	I0815 18:37:21.769674   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.769950   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.769973   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.770072   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH client type: external
	I0815 18:37:21.770103   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa (-rw-------)
	I0815 18:37:21.770134   67936 main.go:141] libmachine: (no-preload-599042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:21.770147   67936 main.go:141] libmachine: (no-preload-599042) DBG | About to run SSH command:
	I0815 18:37:21.770162   67936 main.go:141] libmachine: (no-preload-599042) DBG | exit 0
	I0815 18:37:21.888536   67936 main.go:141] libmachine: (no-preload-599042) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:21.888900   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetConfigRaw
	I0815 18:37:21.889541   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:21.892351   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892730   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.892760   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892976   67936 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/config.json ...
	I0815 18:37:21.893181   67936 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:21.893203   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:21.893404   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.895471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895774   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.895812   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895967   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.896153   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896334   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896522   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.896697   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.896872   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.896884   67936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:21.992598   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:21.992622   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.992856   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:37:21.992884   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.993095   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.995586   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.995902   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.995930   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.996051   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.996239   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996375   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996538   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.996691   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.996869   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.996884   67936 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-599042 && echo "no-preload-599042" | sudo tee /etc/hostname
	I0815 18:37:22.106513   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-599042
	
	I0815 18:37:22.106553   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.109655   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110111   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.110143   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110362   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.110548   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110718   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110838   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.110970   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.111141   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.111162   67936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-599042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-599042/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-599042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:22.221858   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:22.221898   67936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:22.221924   67936 buildroot.go:174] setting up certificates
	I0815 18:37:22.221938   67936 provision.go:84] configureAuth start
	I0815 18:37:22.221956   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:22.222278   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:22.225058   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225374   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.225410   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225544   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.227539   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.227885   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.227929   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.228052   67936 provision.go:143] copyHostCerts
	I0815 18:37:22.228111   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:22.228126   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:22.228190   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:22.228273   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:22.228282   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:22.228301   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:22.228352   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:22.228359   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:22.228375   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:22.228428   67936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.no-preload-599042 san=[127.0.0.1 192.168.72.14 localhost minikube no-preload-599042]
	I0815 18:37:22.383520   67936 provision.go:177] copyRemoteCerts
	I0815 18:37:22.383578   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:22.383601   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.386048   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386303   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.386338   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386566   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.386722   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.386894   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.387036   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.470828   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:22.494929   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:22.519545   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:37:22.544417   67936 provision.go:87] duration metric: took 322.465732ms to configureAuth
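configureAuth above regenerates the client/server TLS material and pushes server-key.pem, ca.pem and server.pem into /etc/docker on the guest. A minimal, hedged spot-check (illustrative only, not something this test run executes) that the copied server cert really carries the SANs requested in the san=[...] list logged by provision.go:

    # run on the guest; openssl being present on the buildroot image is an assumption
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # expected to list: 127.0.0.1, 192.168.72.14, localhost, minikube, no-preload-599042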
	I0815 18:37:22.544442   67936 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:22.544661   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:22.544736   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.547284   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547610   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.547641   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547876   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.548076   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548271   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548413   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.548594   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.548795   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.548818   67936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:22.803896   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:22.803924   67936 machine.go:96] duration metric: took 910.728961ms to provisionDockerMachine
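The tee/systemctl command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O; the echoed output confirms the file contents. How the variable reaches the crio process is not shown on these lines, so a hedged way to verify the wiring (assuming the ISO's crio.service actually references the sysconfig file) would be:

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environment   # assumption: an EnvironmentFile=/ExecStart reference shows up here
    pgrep -a crio                              # if passed through ExecStart, --insecure-registry 10.96.0.0/12 appears in the args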
	I0815 18:37:22.803935   67936 start.go:293] postStartSetup for "no-preload-599042" (driver="kvm2")
	I0815 18:37:22.803945   67936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:22.803959   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:22.804274   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:22.804322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.807041   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807437   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.807467   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807570   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.807747   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.807906   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.808002   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.887667   67936 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:22.892368   67936 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:22.892393   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:22.892480   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:22.892588   67936 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:22.892681   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:22.901987   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:22.927782   67936 start.go:296] duration metric: took 123.834401ms for postStartSetup
	I0815 18:37:22.927823   67936 fix.go:56] duration metric: took 18.630196933s for fixHost
	I0815 18:37:22.927848   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.930378   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930728   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.930755   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930868   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.931043   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931386   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.931538   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.931705   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.931718   67936 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:23.029393   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747042.997661196
	
	I0815 18:37:23.029423   67936 fix.go:216] guest clock: 1723747042.997661196
	I0815 18:37:23.029433   67936 fix.go:229] Guest: 2024-08-15 18:37:22.997661196 +0000 UTC Remote: 2024-08-15 18:37:22.927828036 +0000 UTC m=+353.975665928 (delta=69.83316ms)
	I0815 18:37:23.029455   67936 fix.go:200] guest clock delta is within tolerance: 69.83316ms
	I0815 18:37:23.029465   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 18.731874864s
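The date +%s.%N round trip above is the guest-clock skew check: the guest's high-resolution epoch time is compared with the host-side reference and the delta (69.83316ms here) is accepted as within tolerance. A hedged shell sketch of the same comparison; the ssh invocation and the 2-second threshold are illustrative assumptions, not values taken from this log:

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh docker@192.168.72.14 'date +%s.%N')   # placeholder ssh call; the test uses its own SSH client
    awk -v h="$host_ts" -v g="$guest_ts" 'BEGIN {
      d = g - h; if (d < 0) d = -d
      printf "delta=%.6fs\n", d
      if (d < 2.0) exit 0                                # assumed tolerance, for illustration only
      exit 1
    }'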
	I0815 18:37:23.029491   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.029730   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:23.031885   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032242   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.032261   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032449   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.032908   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033062   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033149   67936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:23.033197   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.033303   67936 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:23.033322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.035943   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.035987   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036327   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036433   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036463   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036482   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036657   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036836   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036855   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.036966   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.037039   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037119   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037183   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.037242   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.117399   67936 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:23.138614   67936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:23.287862   67936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:23.293943   67936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:23.294013   67936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:23.310957   67936 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
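The find/mv above renames any pre-existing bridge or podman CNI configs to *.mk_disabled so they stop taking effect, leaving network setup to the CNI configured later in the start flow. A hedged look at what remains on the guest afterwards:

    ls -l /etc/cni/net.d/
    # expected: 87-podman-bridge.conflist.mk_disabled (renamed, inert); no active *bridge*/*podman* config left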
	I0815 18:37:23.310987   67936 start.go:495] detecting cgroup driver to use...
	I0815 18:37:23.311067   67936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:23.326641   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:23.340650   67936 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:23.340708   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:23.355401   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:23.369033   67936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:23.480891   67936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:23.629690   67936 docker.go:233] disabling docker service ...
	I0815 18:37:23.629782   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:23.644372   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:23.658312   67936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:23.779999   67936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:23.902630   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:23.917453   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:23.935696   67936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:37:23.935749   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.946031   67936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:23.946106   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.956639   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.967148   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.978049   67936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:23.989000   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.999290   67936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.017002   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
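The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image to registry.k8s.io/pause:3.10, switches cgroup_manager to cgroupfs, forces conmon into the "pod" cgroup, and (re)creates a default_sysctls list that opens unprivileged ports. A hedged spot-check of the result; only the edited keys below come from the commands above, the surrounding TOML layout is an assumption:

    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",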
	I0815 18:37:24.027432   67936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:24.036714   67936 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:24.036770   67936 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:24.048956   67936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:24.058269   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:24.173548   67936 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:24.316383   67936 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:24.316462   67936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:24.321726   67936 start.go:563] Will wait 60s for crictl version
	I0815 18:37:24.321803   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.325718   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:24.362995   67936 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:24.363099   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.392678   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.424128   67936 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:37:20.654186   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:23.154893   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:21.658499   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:24.159865   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
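The interleaved pod_ready messages come from parallel StartStop profiles polling the Ready condition of their metrics-server pods once per interval. A hedged manual equivalent of that check (run against the matching profile's kubeconfig/context, which these lines do not name):

    kubectl -n kube-system get pod metrics-server-6867b74b74-wp5rn \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" while the pod stays unready, matching the repeated log lines above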
	I0815 18:37:22.901608   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.401519   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.901287   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.401831   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.901547   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.401220   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.901109   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.401441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.901515   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:27.401258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.425451   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:24.428263   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428631   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:24.428656   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428833   67936 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:24.433343   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:24.446011   67936 kubeadm.go:883] updating cluster {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:24.446123   67936 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:37:24.446168   67936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:24.484321   67936 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:37:24.484346   67936 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:24.484417   67936 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.484429   67936 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.484444   67936 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.484470   67936 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.484472   67936 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.484581   67936 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.484583   67936 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 18:37:24.484585   67936 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485844   67936 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 18:37:24.485852   67936 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.485837   67936 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.485906   67936 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
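Each daemon lookup above fails because no local Docker daemon holds these images, so the loader falls back to the tarballs already downloaded into minikube's on-disk image cache (the same paths that appear in the "Loading image from:" lines further down). A hedged listing of that cache directory:

    ls /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/
    # expected (per the cache paths logged below): kube-apiserver_v1.31.0, kube-controller-manager_v1.31.0,
    #   kube-scheduler_v1.31.0, kube-proxy_v1.31.0, etcd_3.5.15-0, and a coredns/ subdirectory holding coredns_v1.11.1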
	I0815 18:37:24.646217   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.653405   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.658441   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.662835   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.662858   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.681979   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.715361   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 18:37:24.722352   67936 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 18:37:24.722391   67936 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.722450   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.787439   67936 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 18:37:24.787486   67936 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.787530   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810570   67936 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 18:37:24.810606   67936 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 18:37:24.810612   67936 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.810630   67936 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.810666   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810667   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841566   67936 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 18:37:24.841617   67936 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.841669   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841698   67936 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 18:37:24.841743   67936 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.841800   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.950875   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.950918   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.950933   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.950989   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.951004   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.951052   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.079551   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.079601   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.079634   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.084852   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.084874   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.084910   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.216095   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.216235   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.216308   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.216384   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.216400   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.216431   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.336055   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 18:37:25.336126   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 18:37:25.336180   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.336222   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:25.336181   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 18:37:25.336320   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:25.352527   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 18:37:25.352566   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 18:37:25.352592   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 18:37:25.352639   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:25.352650   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:25.352702   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:25.355747   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 18:37:25.355764   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355769   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 18:37:25.355797   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355806   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 18:37:25.363222   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 18:37:25.363257   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 18:37:25.363435   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 18:37:25.476601   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142118   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.786287506s)
	I0815 18:37:28.142134   67936 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.665496935s)
	I0815 18:37:28.142146   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 18:37:28.142177   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142190   67936 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 18:37:28.142220   67936 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142244   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142259   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:25.155516   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.156071   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:29.157389   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:26.658491   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:28.659080   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.901777   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.401103   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.901746   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.401521   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.901691   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.401326   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.901672   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.401534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.901013   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:32.401696   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.598348   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.456076001s)
	I0815 18:37:29.598380   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 18:37:29.598404   67936 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598407   67936 ssh_runner.go:235] Completed: which crictl: (1.456124508s)
	I0815 18:37:29.598451   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598474   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.495864   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.897383444s)
	I0815 18:37:31.495897   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.897403663s)
	I0815 18:37:31.495902   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 18:37:31.495931   67936 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.657799   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:34.156377   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:31.158308   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:33.159177   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:35.668218   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:32.901441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.901095   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.401705   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.901020   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.401019   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.901094   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.400952   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.901717   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:37.401701   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.526372   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.030374686s)
	I0815 18:37:35.526410   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 18:37:35.526422   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.030343547s)
	I0815 18:37:35.526438   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.526482   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:35.526483   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.570806   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 18:37:35.570906   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:37.500059   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.973499408s)
	I0815 18:37:37.500098   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 18:37:37.500120   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:37.500072   67936 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.929150036s)
	I0815 18:37:37.500208   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 18:37:37.500161   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:36.157239   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.656856   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.158685   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:40.158728   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:37.901353   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.401426   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.901599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.401173   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.901593   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.401758   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.401698   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:42.401409   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.563532   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.063281797s)
	I0815 18:37:39.563562   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 18:37:39.563595   67936 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:39.563642   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:40.208180   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 18:37:40.208232   67936 cache_images.go:123] Successfully loaded all cached images
	I0815 18:37:40.208240   67936 cache_images.go:92] duration metric: took 15.723882738s to LoadCachedImages
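All of the cached tarballs have now been pushed through podman load, 15.7s end to end; pause:3.10 never appears in the transfer list above, presumably because it already resolved in the runtime. A hedged post-condition check:

    sudo crictl images | grep -E 'kube-(apiserver|controller-manager|scheduler|proxy)|etcd|coredns|pause|storage-provisioner'
    # every tag from the LoadCachedImages list above (v1.31.0, 3.5.15-0, v1.11.1, 3.10, v5) should now resolve locally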
	I0815 18:37:40.208252   67936 kubeadm.go:934] updating node { 192.168.72.14 8443 v1.31.0 crio true true} ...
	I0815 18:37:40.208416   67936 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-599042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:40.208544   67936 ssh_runner.go:195] Run: crio config
	I0815 18:37:40.261526   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:40.261545   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:40.261552   67936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:40.261572   67936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.14 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-599042 NodeName:no-preload-599042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:37:40.261688   67936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-599042"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:40.261742   67936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:37:40.271844   67936 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:40.271921   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:40.280957   67936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0815 18:37:40.297378   67936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:40.313215   67936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0815 18:37:40.329640   67936 ssh_runner.go:195] Run: grep 192.168.72.14	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:40.333331   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
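The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the node IP after the restart. A rough Go equivalent of that grep/echo/cp pipeline (illustrative only; the test performs it with bash over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any stale line ending in "<TAB>host" and appends a
// fresh "ip<TAB>host" mapping, which is exactly what the shell pipeline does.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.14", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}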
	I0815 18:37:40.344805   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:40.457352   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:40.475219   67936 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042 for IP: 192.168.72.14
	I0815 18:37:40.475238   67936 certs.go:194] generating shared ca certs ...
	I0815 18:37:40.475252   67936 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:40.475416   67936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:40.475475   67936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:40.475489   67936 certs.go:256] generating profile certs ...
	I0815 18:37:40.475591   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.key
	I0815 18:37:40.475670   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key.15ba6898
	I0815 18:37:40.475714   67936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key
	I0815 18:37:40.475865   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:40.475904   67936 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:40.475917   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:40.475950   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:40.475978   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:40.476012   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:40.476069   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:40.476738   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:40.513554   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:40.549095   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:40.578010   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:40.612637   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:37:40.639974   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:37:40.672937   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:40.696889   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:37:40.721258   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:40.744104   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:40.766463   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:40.788628   67936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:40.805346   67936 ssh_runner.go:195] Run: openssl version
	I0815 18:37:40.811193   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:40.822610   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826918   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826969   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.832544   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:40.843338   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:40.854032   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858512   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858563   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.864247   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:40.874724   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:40.885538   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889849   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889899   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.895258   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
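Each of the three blocks above copies a PEM into /usr/share/ca-certificates and then links it under its OpenSSL subject hash in /etc/ssl/certs (for example b5213941.0 for the minikube CA), which is how OpenSSL-based clients discover trusted CAs. A sketch of the same idea, shelling out to openssl for the hash (hypothetical helper; the test runs the equivalent ln -fs commands remotely):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash runs `openssl x509 -hash -noout -in <pem>` and creates
// /etc/ssl/certs/<hash>.0 pointing at the certificate.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}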
	I0815 18:37:40.906841   67936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:40.911629   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:40.918085   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:40.924194   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:40.930009   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:40.935634   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:40.941168   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
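The `-checkend 86400` runs above make openssl exit non-zero if a certificate expires within the next 24 hours, which is what would trigger regeneration on restart. The same test can be done natively with crypto/x509 (a sketch, not minikube's actual code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// equivalent to `openssl x509 -checkend <seconds>` returning non-zero.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}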
	I0815 18:37:40.946761   67936 kubeadm.go:392] StartCluster: {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:40.946836   67936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:40.946874   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:40.990733   67936 cri.go:89] found id: ""
	I0815 18:37:40.990808   67936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:41.002969   67936 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:41.002988   67936 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:41.003041   67936 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:41.013722   67936 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:41.015079   67936 kubeconfig.go:125] found "no-preload-599042" server: "https://192.168.72.14:8443"
	I0815 18:37:41.017905   67936 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:41.029240   67936 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.14
	I0815 18:37:41.029271   67936 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:41.029284   67936 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:41.029326   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:41.064689   67936 cri.go:89] found id: ""
	I0815 18:37:41.064754   67936 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:41.085195   67936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:41.096355   67936 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:41.096375   67936 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:41.096425   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:41.106887   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:41.106941   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:41.117599   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:41.127956   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:41.128020   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:41.137384   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.146075   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:41.146122   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.156417   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:41.165287   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:41.165325   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:41.174245   67936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:41.183335   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:41.314804   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.422591   67936 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.107749325s)
	I0815 18:37:42.422628   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.642065   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.710265   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
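Because existing configuration was found, the restart replays individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config instead of running a full init. Roughly, each step is an exec of the versioned kubeadm binary with the cluster's binaries directory first on PATH (illustrative sketch, not minikube's bootstrapper code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// withMinikubePath returns the current environment with PATH prefixed by the
// versioned binaries directory, matching the `sudo env PATH=...` wrapper above.
func withMinikubePath() []string {
	env := []string{"PATH=/var/lib/minikube/binaries/v1.31.0:" + os.Getenv("PATH")}
	for _, e := range os.Environ() {
		if !strings.HasPrefix(e, "PATH=") {
			env = append(env, e)
		}
	}
	return env
}

func runPhase(phase ...string) error {
	args := append(append([]string{"init", "phase"}, phase...), "--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd := exec.Command("/var/lib/minikube/binaries/v1.31.0/kubeadm", args...)
	cmd.Env = withMinikubePath()
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubeadm %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	for _, p := range [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	} {
		if err := runPhase(p...); err != nil {
			fmt.Println(err)
			return
		}
	}
}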
	I0815 18:37:42.791233   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:42.791334   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.291538   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.791682   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.831611   67936 api_server.go:72] duration metric: took 1.040390925s to wait for apiserver process to appear ...
	I0815 18:37:43.831641   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:37:43.831662   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:43.832110   67936 api_server.go:269] stopped: https://192.168.72.14:8443/healthz: Get "https://192.168.72.14:8443/healthz": dial tcp 192.168.72.14:8443: connect: connection refused
	I0815 18:37:41.154701   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:43.655756   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.661385   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:45.158918   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.901106   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.401146   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.901869   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.401483   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.901302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.401505   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.901504   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.401025   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.901713   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:47.401588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.332554   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.112640   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:37:47.112668   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:37:47.112681   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.244211   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.244246   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.332375   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.339129   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.339153   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.831731   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.836308   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.836330   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.331914   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.336314   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:48.336347   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.831862   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.836012   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:37:48.842971   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:37:48.842996   67936 api_server.go:131] duration metric: took 5.011346791s to wait for apiserver health ...
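The 403 and the 500 responses above are expected while the restarted apiserver finishes its post-start hooks (RBAC bootstrap roles, priority classes, and so on); the wait loop simply keeps polling /healthz until it returns 200 "ok". A bare-bones version of that loop (sketch; the real check also sends client credentials and bounds each request separately):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200 or the overall deadline passes. TLS verification is skipped only
// because this throwaway probe targets a self-signed cluster CA.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.72.14:8443/healthz", 4*time.Minute))
}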
	I0815 18:37:48.843008   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:48.843015   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:48.844939   67936 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:37:48.846262   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:37:48.857335   67936 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
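The 496-byte file pushed to /etc/cni/net.d/1-k8s.conflist is a bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. The snippet below writes a generic bridge + host-local conflist in that spirit; the JSON is an assumption for illustration, not a byte-for-byte copy of the file minikube generates:

package main

import "os"

// A conventional bridge CNI chain: the bridge plugin with host-local IPAM on
// the pod CIDR, plus portmap for hostPort support.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}]
      }
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}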
	I0815 18:37:48.876370   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:37:48.886582   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:37:48.886628   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:37:48.886640   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:37:48.886653   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:37:48.886666   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:37:48.886679   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 18:37:48.886691   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:37:48.886701   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:37:48.886711   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 18:37:48.886722   67936 system_pods.go:74] duration metric: took 10.329234ms to wait for pod list to return data ...
	I0815 18:37:48.886736   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:37:48.890525   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:37:48.890560   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:37:48.890571   67936 node_conditions.go:105] duration metric: took 3.828616ms to run NodePressure ...
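The kube-system inventory and node-capacity checks above are plain list calls made with the kubeconfig that was just restored. For reference, the same pod listing with client-go (a sketch that assumes the integration run's kubeconfig path; minikube wires this up through its own helpers):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19450-13013/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
}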
	I0815 18:37:48.890590   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:46.155548   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:48.655549   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:49.183845   67936 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188602   67936 kubeadm.go:739] kubelet initialised
	I0815 18:37:49.188629   67936 kubeadm.go:740] duration metric: took 4.755371ms waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188639   67936 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:49.193101   67936 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.199195   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199215   67936 pod_ready.go:82] duration metric: took 6.088761ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.199226   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199236   67936 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.205076   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205095   67936 pod_ready.go:82] duration metric: took 5.848521ms for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.205105   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205111   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.210559   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210578   67936 pod_ready.go:82] duration metric: took 5.449861ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.210587   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210594   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.281799   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281828   67936 pod_ready.go:82] duration metric: took 71.206144ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.281840   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281850   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.680097   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680121   67936 pod_ready.go:82] duration metric: took 398.261641ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.680131   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680136   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.080391   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080415   67936 pod_ready.go:82] duration metric: took 400.272871ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.080425   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080430   67936 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.482715   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482744   67936 pod_ready.go:82] duration metric: took 402.304556ms for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.482753   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482761   67936 pod_ready.go:39] duration metric: took 1.294109816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:50.482779   67936 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:37:50.495888   67936 ops.go:34] apiserver oom_adj: -16
	I0815 18:37:50.495912   67936 kubeadm.go:597] duration metric: took 9.4929178s to restartPrimaryControlPlane
	I0815 18:37:50.495924   67936 kubeadm.go:394] duration metric: took 9.549167115s to StartCluster
	I0815 18:37:50.495943   67936 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.496020   67936 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:50.497743   67936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.497976   67936 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:37:50.498166   67936 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:37:50.498225   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:50.498251   67936 addons.go:69] Setting storage-provisioner=true in profile "no-preload-599042"
	I0815 18:37:50.498266   67936 addons.go:69] Setting default-storageclass=true in profile "no-preload-599042"
	I0815 18:37:50.498287   67936 addons.go:234] Setting addon storage-provisioner=true in "no-preload-599042"
	I0815 18:37:50.498303   67936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-599042"
	W0815 18:37:50.498311   67936 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:37:50.498343   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.498708   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498733   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498745   67936 addons.go:69] Setting metrics-server=true in profile "no-preload-599042"
	I0815 18:37:50.498753   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.498783   67936 addons.go:234] Setting addon metrics-server=true in "no-preload-599042"
	W0815 18:37:50.498795   67936 addons.go:243] addon metrics-server should already be in state true
	I0815 18:37:50.498734   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.499070   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.499350   67936 out.go:177] * Verifying Kubernetes components...
	I0815 18:37:50.499436   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.499467   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.500629   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:50.514727   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0815 18:37:50.514956   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0815 18:37:50.515112   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515379   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515622   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515639   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.515844   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515866   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.516028   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.516697   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.516741   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.516854   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.517455   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.517487   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.517879   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0815 18:37:50.518180   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.518645   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.518666   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.518975   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.519155   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.522283   67936 addons.go:234] Setting addon default-storageclass=true in "no-preload-599042"
	W0815 18:37:50.522301   67936 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:37:50.522321   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.522589   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.522616   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.533306   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0815 18:37:50.533891   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.534378   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.534403   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.535077   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.535251   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.536333   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0815 18:37:50.536960   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.537421   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.537484   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.537500   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.537581   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0815 18:37:50.537832   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.537992   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.538044   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.538964   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.538983   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.539442   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.539494   67936 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:37:50.540127   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.540138   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.540166   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.540633   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:37:50.540653   67936 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:37:50.540673   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.541641   67936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:47.658449   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.159642   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.542848   67936 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.542867   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:37:50.542883   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.544059   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544644   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.544669   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544879   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.545056   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.545226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.545363   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.545609   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.545957   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.545984   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.546188   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.546350   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.546459   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.546563   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.576049   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0815 18:37:50.576398   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.576963   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.576991   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.577315   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.577536   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.579041   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.579244   67936 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.579259   67936 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:37:50.579273   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.583471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583857   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.583884   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583984   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.584140   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.584298   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.584431   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.711232   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:50.738297   67936 node_ready.go:35] waiting up to 6m0s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:50.787041   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.876459   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.926707   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:37:50.926727   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:37:50.967734   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:37:50.967764   67936 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:37:50.994557   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:50.994580   67936 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:37:51.018573   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:51.217167   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217199   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217511   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217561   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217570   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.217579   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217592   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217846   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217889   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217900   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.223755   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.223774   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.224006   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.224024   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.794888   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.794919   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795198   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.795227   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795240   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.795256   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.795267   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795503   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795521   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936158   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936178   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936438   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.936467   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936505   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936519   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936528   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936754   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936773   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936785   67936 addons.go:475] Verifying addon metrics-server=true in "no-preload-599042"
	I0815 18:37:51.938619   67936 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 18:37:47.901026   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.401023   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.901661   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.401358   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.901410   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.401040   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.901695   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.401365   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.901733   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:52.401439   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.939743   67936 addons.go:510] duration metric: took 1.441583595s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 18:37:52.742152   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:51.155350   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:53.654487   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.658151   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:54.658269   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.901361   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.401417   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.901380   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.401820   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.901113   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.401270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.900941   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.901834   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:57.401496   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.242506   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:57.742723   67936 node_ready.go:49] node "no-preload-599042" has status "Ready":"True"
	I0815 18:37:57.742746   67936 node_ready.go:38] duration metric: took 7.00442012s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:57.742764   67936 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:57.747927   67936 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752478   67936 pod_ready.go:93] pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:57.752513   67936 pod_ready.go:82] duration metric: took 4.560553ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752524   67936 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760896   67936 pod_ready.go:93] pod "etcd-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.760924   67936 pod_ready.go:82] duration metric: took 1.008390436s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760937   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774529   67936 pod_ready.go:93] pod "kube-apiserver-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.774557   67936 pod_ready.go:82] duration metric: took 13.611063ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774568   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793851   67936 pod_ready.go:93] pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.793873   67936 pod_ready.go:82] duration metric: took 19.297089ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793885   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943096   67936 pod_ready.go:93] pod "kube-proxy-bwb9h" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.943120   67936 pod_ready.go:82] duration metric: took 149.227014ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943129   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:56.154874   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:58.655280   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.158586   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:59.159257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.901938   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.401246   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.900950   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.400984   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.401707   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.901455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.901613   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:02.401302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.342426   67936 pod_ready.go:93] pod "kube-scheduler-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:59.342447   67936 pod_ready.go:82] duration metric: took 399.312035ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:59.342460   67936 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:38:01.349419   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.848558   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.154194   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.154779   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.658502   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:04.158895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:02.901914   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.401357   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.901258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.400961   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.401852   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.901115   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.401170   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.901694   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:07.401816   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.849586   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.349057   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:05.155847   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.653607   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:09.654245   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:06.658092   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.659361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.900966   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.401136   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.901534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.400982   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.901126   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.401120   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.901175   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.401704   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.901710   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:12.401712   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.349443   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.349942   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.655212   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:14.154508   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.158562   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:13.657985   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:15.658088   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.901680   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.401532   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.901198   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:13.901295   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:13.938743   68713 cri.go:89] found id: ""
	I0815 18:38:13.938770   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.938778   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:13.938786   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:13.938843   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:13.971997   68713 cri.go:89] found id: ""
	I0815 18:38:13.972029   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.972041   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:13.972048   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:13.972111   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:14.006793   68713 cri.go:89] found id: ""
	I0815 18:38:14.006825   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.006836   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:14.006844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:14.006903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:14.041546   68713 cri.go:89] found id: ""
	I0815 18:38:14.041575   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.041587   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:14.041595   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:14.041680   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:14.077614   68713 cri.go:89] found id: ""
	I0815 18:38:14.077639   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.077648   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:14.077653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:14.077704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:14.113683   68713 cri.go:89] found id: ""
	I0815 18:38:14.113711   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.113721   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:14.113730   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:14.113790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:14.149581   68713 cri.go:89] found id: ""
	I0815 18:38:14.149608   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.149616   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:14.149622   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:14.149678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:14.191576   68713 cri.go:89] found id: ""
	I0815 18:38:14.191606   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.191614   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:14.191622   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:14.191635   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:14.243253   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:14.243287   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:14.256818   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:14.256845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:14.382914   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.382933   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:14.382948   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:14.461826   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:14.461859   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.005615   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:17.020977   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:17.021042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:17.070191   68713 cri.go:89] found id: ""
	I0815 18:38:17.070220   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.070232   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:17.070239   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:17.070301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:17.118582   68713 cri.go:89] found id: ""
	I0815 18:38:17.118612   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.118624   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:17.118631   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:17.118693   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:17.165380   68713 cri.go:89] found id: ""
	I0815 18:38:17.165404   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.165413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:17.165421   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:17.165483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:17.204630   68713 cri.go:89] found id: ""
	I0815 18:38:17.204660   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.204670   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:17.204678   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:17.204740   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:17.239182   68713 cri.go:89] found id: ""
	I0815 18:38:17.239210   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.239219   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:17.239226   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:17.239285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:17.276329   68713 cri.go:89] found id: ""
	I0815 18:38:17.276356   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.276367   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:17.276375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:17.276472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:17.312387   68713 cri.go:89] found id: ""
	I0815 18:38:17.312418   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.312427   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:17.312433   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:17.312485   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:17.348277   68713 cri.go:89] found id: ""
	I0815 18:38:17.348300   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.348308   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:17.348315   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:17.348334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:17.424886   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:17.424924   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.465491   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:17.465518   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:17.517687   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:17.517719   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:17.531928   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:17.531959   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:17.606987   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.849001   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:17.349912   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:16.155496   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.653621   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.159850   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.658717   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.107740   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:20.123194   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:20.123255   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:20.163586   68713 cri.go:89] found id: ""
	I0815 18:38:20.163608   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.163619   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:20.163627   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:20.163676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:20.200171   68713 cri.go:89] found id: ""
	I0815 18:38:20.200196   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.200204   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:20.200210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:20.200270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:20.234739   68713 cri.go:89] found id: ""
	I0815 18:38:20.234770   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.234781   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:20.234788   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:20.234849   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:20.270182   68713 cri.go:89] found id: ""
	I0815 18:38:20.270206   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.270215   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:20.270220   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:20.270281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:20.303643   68713 cri.go:89] found id: ""
	I0815 18:38:20.303672   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.303682   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:20.303690   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:20.303748   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:20.339399   68713 cri.go:89] found id: ""
	I0815 18:38:20.339431   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.339441   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:20.339449   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:20.339511   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:20.377220   68713 cri.go:89] found id: ""
	I0815 18:38:20.377245   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.377252   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:20.377258   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:20.377310   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:20.411202   68713 cri.go:89] found id: ""
	I0815 18:38:20.411238   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.411249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:20.411268   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:20.411282   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:20.462846   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:20.462879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:20.476569   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:20.476597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:20.554243   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:20.554269   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:20.554285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:20.637450   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:20.637493   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:19.849194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:21.849502   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.655378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.154633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.160747   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.658706   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.182633   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:23.196953   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:23.197026   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:23.232011   68713 cri.go:89] found id: ""
	I0815 18:38:23.232039   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.232051   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:23.232064   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:23.232114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:23.266963   68713 cri.go:89] found id: ""
	I0815 18:38:23.266992   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.267000   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:23.267006   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:23.267069   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:23.306473   68713 cri.go:89] found id: ""
	I0815 18:38:23.306496   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.306504   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:23.306510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:23.306574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:23.343542   68713 cri.go:89] found id: ""
	I0815 18:38:23.343574   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.343585   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:23.343592   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:23.343652   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:23.382468   68713 cri.go:89] found id: ""
	I0815 18:38:23.382527   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.382539   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:23.382547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:23.382612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:23.418857   68713 cri.go:89] found id: ""
	I0815 18:38:23.418882   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.418891   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:23.418897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:23.418956   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:23.460971   68713 cri.go:89] found id: ""
	I0815 18:38:23.461004   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.461016   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:23.461023   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:23.461100   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:23.494139   68713 cri.go:89] found id: ""
	I0815 18:38:23.494172   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.494183   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:23.494194   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:23.494208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:23.547874   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:23.547908   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:23.562251   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:23.562278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:23.636503   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:23.636528   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:23.636545   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:23.716020   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:23.716051   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.255081   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:26.270118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:26.270184   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:26.308586   68713 cri.go:89] found id: ""
	I0815 18:38:26.308612   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.308623   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:26.308630   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:26.308688   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:26.344364   68713 cri.go:89] found id: ""
	I0815 18:38:26.344394   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.344410   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:26.344418   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:26.344533   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:26.381621   68713 cri.go:89] found id: ""
	I0815 18:38:26.381642   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.381650   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:26.381655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:26.381699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:26.416091   68713 cri.go:89] found id: ""
	I0815 18:38:26.416118   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.416128   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:26.416136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:26.416195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:26.456038   68713 cri.go:89] found id: ""
	I0815 18:38:26.456068   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.456080   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:26.456088   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:26.456151   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:26.490728   68713 cri.go:89] found id: ""
	I0815 18:38:26.490758   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.490769   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:26.490776   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:26.490837   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:26.529388   68713 cri.go:89] found id: ""
	I0815 18:38:26.529422   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.529434   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:26.529440   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:26.529489   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:26.567452   68713 cri.go:89] found id: ""
	I0815 18:38:26.567475   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.567484   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:26.567491   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:26.567503   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:26.641841   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:26.641863   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:26.641879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:26.719403   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:26.719438   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.760460   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:26.760507   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:26.814450   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:26.814480   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:24.349319   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:26.850207   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.155213   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.654265   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.656816   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.663849   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:30.158417   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.329451   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:29.344634   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:29.344706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:29.379278   68713 cri.go:89] found id: ""
	I0815 18:38:29.379308   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.379319   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:29.379326   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:29.379385   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:29.411854   68713 cri.go:89] found id: ""
	I0815 18:38:29.411881   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.411891   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:29.411898   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:29.411965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:29.443975   68713 cri.go:89] found id: ""
	I0815 18:38:29.444004   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.444014   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:29.444022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:29.444081   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:29.477919   68713 cri.go:89] found id: ""
	I0815 18:38:29.477944   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.477954   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:29.477962   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:29.478020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:29.518944   68713 cri.go:89] found id: ""
	I0815 18:38:29.518973   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.518985   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:29.518992   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:29.519052   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:29.553876   68713 cri.go:89] found id: ""
	I0815 18:38:29.553903   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.553913   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:29.553921   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:29.553974   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:29.590768   68713 cri.go:89] found id: ""
	I0815 18:38:29.590804   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.590815   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:29.590823   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:29.590879   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:29.625553   68713 cri.go:89] found id: ""
	I0815 18:38:29.625578   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.625586   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:29.625595   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:29.625606   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.668447   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:29.668478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:29.721002   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:29.721035   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:29.734955   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:29.734983   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:29.808703   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:29.808726   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:29.808742   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.397781   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:32.413876   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:32.413937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:32.453689   68713 cri.go:89] found id: ""
	I0815 18:38:32.453720   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.453777   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:32.453791   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:32.453839   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:32.490529   68713 cri.go:89] found id: ""
	I0815 18:38:32.490559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.490567   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:32.490573   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:32.490622   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:32.527680   68713 cri.go:89] found id: ""
	I0815 18:38:32.527710   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.527720   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:32.527727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:32.527790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:32.564619   68713 cri.go:89] found id: ""
	I0815 18:38:32.564656   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.564667   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:32.564677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:32.564745   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:32.600530   68713 cri.go:89] found id: ""
	I0815 18:38:32.600559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.600570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:32.600577   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:32.600639   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:32.636779   68713 cri.go:89] found id: ""
	I0815 18:38:32.636813   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.636821   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:32.636828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:32.636897   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:32.673743   68713 cri.go:89] found id: ""
	I0815 18:38:32.673774   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.673786   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:32.673794   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:32.673853   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:32.709678   68713 cri.go:89] found id: ""
	I0815 18:38:32.709708   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.709719   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:32.709730   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:32.709744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.785961   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:32.785998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.349763   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:31.350398   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:33.848873   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.155992   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.159855   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.657783   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.828205   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:32.828237   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:32.894624   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:32.894666   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:32.910742   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:32.910769   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:32.980853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.481438   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:35.495373   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:35.495444   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:35.529184   68713 cri.go:89] found id: ""
	I0815 18:38:35.529212   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.529221   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:35.529226   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:35.529275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:35.565188   68713 cri.go:89] found id: ""
	I0815 18:38:35.565214   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.565221   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:35.565227   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:35.565281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:35.600386   68713 cri.go:89] found id: ""
	I0815 18:38:35.600416   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.600428   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:35.600435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:35.600519   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:35.634255   68713 cri.go:89] found id: ""
	I0815 18:38:35.634278   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.634287   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:35.634293   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:35.634339   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:35.670236   68713 cri.go:89] found id: ""
	I0815 18:38:35.670260   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.670268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:35.670273   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:35.670354   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:35.707691   68713 cri.go:89] found id: ""
	I0815 18:38:35.707714   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.707722   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:35.707727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:35.707782   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:35.745791   68713 cri.go:89] found id: ""
	I0815 18:38:35.745820   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.745832   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:35.745844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:35.745916   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:35.784167   68713 cri.go:89] found id: ""
	I0815 18:38:35.784195   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.784205   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:35.784217   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:35.784234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:35.864681   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:35.864711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:35.906831   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:35.906858   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:35.960328   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:35.960366   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:35.974401   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:35.974428   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:36.044789   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.849744   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.348058   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.654916   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.155585   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.658767   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.159236   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.545951   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:38.561473   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:38.561540   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:38.597621   68713 cri.go:89] found id: ""
	I0815 18:38:38.597658   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.597668   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:38.597679   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:38.597756   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:38.632081   68713 cri.go:89] found id: ""
	I0815 18:38:38.632115   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.632127   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:38.632135   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:38.632203   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:38.669917   68713 cri.go:89] found id: ""
	I0815 18:38:38.669944   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.669952   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:38.669958   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:38.670015   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:38.707552   68713 cri.go:89] found id: ""
	I0815 18:38:38.707574   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.707582   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:38.707588   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:38.707642   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:38.746054   68713 cri.go:89] found id: ""
	I0815 18:38:38.746082   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.746093   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:38.746101   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:38.746166   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:38.783901   68713 cri.go:89] found id: ""
	I0815 18:38:38.783933   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.783945   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:38.783952   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:38.784018   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:38.825411   68713 cri.go:89] found id: ""
	I0815 18:38:38.825441   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.825452   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:38.825459   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:38.825520   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:38.863174   68713 cri.go:89] found id: ""
	I0815 18:38:38.863219   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.863231   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:38.863241   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:38.863254   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:38.914016   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:38.914045   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:38.927634   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:38.927659   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:38.993380   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:38.993407   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:38.993422   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:39.077075   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:39.077116   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:41.620219   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:41.633572   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:41.633628   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:41.670330   68713 cri.go:89] found id: ""
	I0815 18:38:41.670351   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.670358   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:41.670364   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:41.670418   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:41.706467   68713 cri.go:89] found id: ""
	I0815 18:38:41.706494   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.706502   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:41.706508   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:41.706564   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:41.742915   68713 cri.go:89] found id: ""
	I0815 18:38:41.742958   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.742970   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:41.742978   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:41.743044   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:41.778650   68713 cri.go:89] found id: ""
	I0815 18:38:41.778679   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.778687   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:41.778692   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:41.778739   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:41.813329   68713 cri.go:89] found id: ""
	I0815 18:38:41.813358   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.813369   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:41.813375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:41.813427   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:41.851351   68713 cri.go:89] found id: ""
	I0815 18:38:41.851383   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.851391   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:41.851398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:41.851460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:41.895097   68713 cri.go:89] found id: ""
	I0815 18:38:41.895130   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.895142   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:41.895150   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:41.895209   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:41.931306   68713 cri.go:89] found id: ""
	I0815 18:38:41.931336   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.931353   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:41.931365   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:41.931381   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:41.944796   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:41.944828   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:42.018868   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:42.018891   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:42.018903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:42.104304   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:42.104340   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:42.143625   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:42.143655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:40.349197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:42.850034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.655478   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.155025   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.159976   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:43.658013   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:45.658358   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.698568   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:44.712171   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:44.712247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.747043   68713 cri.go:89] found id: ""
	I0815 18:38:44.747071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.747079   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:44.747085   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:44.747143   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:44.782660   68713 cri.go:89] found id: ""
	I0815 18:38:44.782691   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.782703   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:44.782711   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:44.782765   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:44.821111   68713 cri.go:89] found id: ""
	I0815 18:38:44.821138   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.821146   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:44.821152   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:44.821222   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:44.859602   68713 cri.go:89] found id: ""
	I0815 18:38:44.859635   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.859647   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:44.859655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:44.859717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:44.895037   68713 cri.go:89] found id: ""
	I0815 18:38:44.895071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.895083   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:44.895090   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:44.895175   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:44.928729   68713 cri.go:89] found id: ""
	I0815 18:38:44.928759   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.928771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:44.928781   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:44.928844   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:44.963945   68713 cri.go:89] found id: ""
	I0815 18:38:44.963977   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.963987   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:44.963996   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:44.964060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:45.001166   68713 cri.go:89] found id: ""
	I0815 18:38:45.001195   68713 logs.go:276] 0 containers: []
	W0815 18:38:45.001206   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:45.001218   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:45.001234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:45.015181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:45.015209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:45.084297   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:45.084322   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:45.084334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:45.173833   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:45.173866   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:45.211863   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:45.211899   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:47.771009   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:47.784865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:47.784926   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.850332   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.347985   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:46.654797   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:48.654936   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.658823   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:50.178115   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.818497   68713 cri.go:89] found id: ""
	I0815 18:38:47.818526   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.818538   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:47.818545   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:47.818608   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:47.857900   68713 cri.go:89] found id: ""
	I0815 18:38:47.857927   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.857935   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:47.857941   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:47.857997   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:47.895778   68713 cri.go:89] found id: ""
	I0815 18:38:47.895809   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.895822   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:47.895829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:47.895887   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:47.937410   68713 cri.go:89] found id: ""
	I0815 18:38:47.937434   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.937442   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:47.937448   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:47.937505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:47.976414   68713 cri.go:89] found id: ""
	I0815 18:38:47.976442   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.976450   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:47.976455   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:47.976525   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:48.014863   68713 cri.go:89] found id: ""
	I0815 18:38:48.014891   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.014899   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:48.014906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:48.014969   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:48.053508   68713 cri.go:89] found id: ""
	I0815 18:38:48.053536   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.053546   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:48.053554   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:48.053624   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:48.088900   68713 cri.go:89] found id: ""
	I0815 18:38:48.088931   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.088943   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:48.088954   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:48.088969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:48.140415   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:48.140447   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:48.155958   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:48.155985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:48.229338   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:48.229368   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:48.229383   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:48.317776   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:48.317814   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:50.860592   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:50.877070   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:50.877154   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:50.937590   68713 cri.go:89] found id: ""
	I0815 18:38:50.937615   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.937622   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:50.937628   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:50.937687   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:50.972573   68713 cri.go:89] found id: ""
	I0815 18:38:50.972603   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.972614   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:50.972622   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:50.972683   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:51.008786   68713 cri.go:89] found id: ""
	I0815 18:38:51.008811   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.008820   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:51.008826   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:51.008875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:51.043076   68713 cri.go:89] found id: ""
	I0815 18:38:51.043105   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.043116   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:51.043123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:51.043186   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:51.078344   68713 cri.go:89] found id: ""
	I0815 18:38:51.078379   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.078391   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:51.078398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:51.078453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:51.114494   68713 cri.go:89] found id: ""
	I0815 18:38:51.114521   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.114532   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:51.114540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:51.114600   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:51.153871   68713 cri.go:89] found id: ""
	I0815 18:38:51.153898   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.153909   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:51.153917   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:51.153984   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:51.187908   68713 cri.go:89] found id: ""
	I0815 18:38:51.187937   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.187948   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:51.187959   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:51.187974   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:51.264172   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:51.264198   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:51.264214   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:51.345238   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:51.345285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:51.385720   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:51.385745   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:51.443313   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:51.443353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:49.849156   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.348628   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:51.154188   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.155256   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.658773   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:54.659127   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.959176   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:53.972031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:53.972101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:54.010673   68713 cri.go:89] found id: ""
	I0815 18:38:54.010699   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.010707   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:54.010714   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:54.010775   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:54.045632   68713 cri.go:89] found id: ""
	I0815 18:38:54.045662   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.045672   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:54.045678   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:54.045727   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:54.082111   68713 cri.go:89] found id: ""
	I0815 18:38:54.082134   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.082142   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:54.082148   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:54.082206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:54.118210   68713 cri.go:89] found id: ""
	I0815 18:38:54.118232   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.118239   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:54.118246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:54.118305   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:54.155474   68713 cri.go:89] found id: ""
	I0815 18:38:54.155499   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.155508   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:54.155515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:54.155572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:54.193263   68713 cri.go:89] found id: ""
	I0815 18:38:54.193298   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.193305   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:54.193311   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:54.193365   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:54.233389   68713 cri.go:89] found id: ""
	I0815 18:38:54.233416   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.233428   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:54.233435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:54.233502   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:54.266127   68713 cri.go:89] found id: ""
	I0815 18:38:54.266155   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.266164   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:54.266176   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:54.266199   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:54.318724   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:54.318762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:54.332993   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:54.333022   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:54.405895   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:54.405915   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:54.405926   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.485819   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:54.485875   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.024956   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:57.038182   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:57.038246   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:57.078020   68713 cri.go:89] found id: ""
	I0815 18:38:57.078044   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.078055   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:57.078063   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:57.078127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:57.115077   68713 cri.go:89] found id: ""
	I0815 18:38:57.115101   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.115110   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:57.115118   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:57.115179   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:57.152711   68713 cri.go:89] found id: ""
	I0815 18:38:57.152737   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.152747   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:57.152755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:57.152819   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:57.190000   68713 cri.go:89] found id: ""
	I0815 18:38:57.190034   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.190042   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:57.190048   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:57.190096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:57.224947   68713 cri.go:89] found id: ""
	I0815 18:38:57.224978   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.224990   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:57.224998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:57.225060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:57.262329   68713 cri.go:89] found id: ""
	I0815 18:38:57.262365   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.262375   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:57.262383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:57.262458   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:57.299471   68713 cri.go:89] found id: ""
	I0815 18:38:57.299498   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.299507   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:57.299513   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:57.299572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:57.357163   68713 cri.go:89] found id: ""
	I0815 18:38:57.357202   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.357211   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:57.357220   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:57.357236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.405154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:57.405184   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:57.459245   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:57.459277   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:57.473663   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:57.473699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:57.546670   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:57.546699   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:57.546715   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.348864   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.848276   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:55.655015   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.158306   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.662722   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:59.159559   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.124455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:00.137985   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:00.138053   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:00.175201   68713 cri.go:89] found id: ""
	I0815 18:39:00.175231   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.175242   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:00.175250   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:00.175328   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:00.209376   68713 cri.go:89] found id: ""
	I0815 18:39:00.209406   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.209418   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:00.209426   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:00.209484   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:00.246860   68713 cri.go:89] found id: ""
	I0815 18:39:00.246889   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.246899   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:00.246906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:00.246965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:00.282787   68713 cri.go:89] found id: ""
	I0815 18:39:00.282814   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.282823   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:00.282829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:00.282875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:00.330227   68713 cri.go:89] found id: ""
	I0815 18:39:00.330256   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.330268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:00.330275   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:00.330338   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:00.363028   68713 cri.go:89] found id: ""
	I0815 18:39:00.363061   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.363072   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:00.363079   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:00.363169   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:00.400484   68713 cri.go:89] found id: ""
	I0815 18:39:00.400522   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.400533   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:00.400540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:00.400597   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:00.436187   68713 cri.go:89] found id: ""
	I0815 18:39:00.436225   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.436238   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:00.436252   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:00.436267   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:00.481960   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:00.481985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:00.535103   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:00.535138   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:00.548541   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:00.548568   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:00.619476   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:00.619505   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:00.619525   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:01.347916   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.349448   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.654384   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.155048   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:01.658374   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.658824   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.206473   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:03.222893   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:03.222967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:03.272249   68713 cri.go:89] found id: ""
	I0815 18:39:03.272275   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.272283   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:03.272291   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:03.272355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:03.336786   68713 cri.go:89] found id: ""
	I0815 18:39:03.336811   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.336819   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:03.336825   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:03.336884   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:03.375977   68713 cri.go:89] found id: ""
	I0815 18:39:03.376002   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.376011   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:03.376016   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:03.376063   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:03.410304   68713 cri.go:89] found id: ""
	I0815 18:39:03.410326   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.410335   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:03.410340   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:03.410403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:03.446147   68713 cri.go:89] found id: ""
	I0815 18:39:03.446176   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.446188   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:03.446195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:03.446256   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:03.482555   68713 cri.go:89] found id: ""
	I0815 18:39:03.482582   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.482591   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:03.482597   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:03.482654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:03.519477   68713 cri.go:89] found id: ""
	I0815 18:39:03.519503   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.519511   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:03.519517   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:03.519574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:03.556539   68713 cri.go:89] found id: ""
	I0815 18:39:03.556566   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.556577   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:03.556587   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:03.556602   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:03.610553   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:03.610593   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:03.625799   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:03.625827   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:03.697106   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:03.697132   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:03.697149   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:03.779089   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:03.779120   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:06.319280   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:06.333284   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:06.333355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:06.369696   68713 cri.go:89] found id: ""
	I0815 18:39:06.369719   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.369727   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:06.369732   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:06.369780   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:06.405023   68713 cri.go:89] found id: ""
	I0815 18:39:06.405046   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.405053   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:06.405059   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:06.405113   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:06.439948   68713 cri.go:89] found id: ""
	I0815 18:39:06.439974   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.439983   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:06.439989   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:06.440048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:06.475613   68713 cri.go:89] found id: ""
	I0815 18:39:06.475642   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.475654   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:06.475664   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:06.475723   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:06.510698   68713 cri.go:89] found id: ""
	I0815 18:39:06.510721   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.510729   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:06.510735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:06.510783   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:06.545831   68713 cri.go:89] found id: ""
	I0815 18:39:06.545861   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.545873   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:06.545880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:06.545940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:06.579027   68713 cri.go:89] found id: ""
	I0815 18:39:06.579053   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.579064   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:06.579072   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:06.579132   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:06.615308   68713 cri.go:89] found id: ""
	I0815 18:39:06.615339   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.615352   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:06.615371   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:06.615396   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:06.671523   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:06.671555   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:06.685556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:06.685580   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:06.765036   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:06.765059   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:06.765071   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:06.843412   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:06.843457   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:05.849018   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.849342   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:05.654854   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.654910   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.655240   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:06.158409   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:08.657902   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:10.658258   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.390799   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:09.404099   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:09.404160   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:09.439534   68713 cri.go:89] found id: ""
	I0815 18:39:09.439563   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.439582   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:09.439591   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:09.439654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:09.478933   68713 cri.go:89] found id: ""
	I0815 18:39:09.478963   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.478974   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:09.478982   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:09.479042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:09.514396   68713 cri.go:89] found id: ""
	I0815 18:39:09.514425   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.514436   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:09.514444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:09.514510   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:09.547749   68713 cri.go:89] found id: ""
	I0815 18:39:09.547775   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.547785   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:09.547793   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:09.547856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:09.583583   68713 cri.go:89] found id: ""
	I0815 18:39:09.583611   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.583623   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:09.583631   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:09.583695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:09.616530   68713 cri.go:89] found id: ""
	I0815 18:39:09.616560   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.616570   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:09.616576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:09.616641   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:09.655167   68713 cri.go:89] found id: ""
	I0815 18:39:09.655189   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.655199   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:09.655207   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:09.655263   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:09.691368   68713 cri.go:89] found id: ""
	I0815 18:39:09.691391   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.691398   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:09.691411   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:09.691426   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:09.740739   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:09.740770   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:09.755049   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:09.755074   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:09.825053   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:09.825080   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:09.825095   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:09.903036   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:09.903076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:12.441898   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:12.454637   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:12.454712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:12.494604   68713 cri.go:89] found id: ""
	I0815 18:39:12.494632   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.494640   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:12.494646   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:12.494699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:12.531587   68713 cri.go:89] found id: ""
	I0815 18:39:12.531631   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.531642   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:12.531649   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:12.531710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:12.564991   68713 cri.go:89] found id: ""
	I0815 18:39:12.565014   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.565021   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:12.565027   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:12.565096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:12.600667   68713 cri.go:89] found id: ""
	I0815 18:39:12.600698   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.600709   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:12.600715   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:12.600777   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:12.633658   68713 cri.go:89] found id: ""
	I0815 18:39:12.633681   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.633691   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:12.633698   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:12.633759   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:12.673709   68713 cri.go:89] found id: ""
	I0815 18:39:12.673730   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.673737   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:12.673743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:12.673790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:12.707353   68713 cri.go:89] found id: ""
	I0815 18:39:12.707378   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.707385   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:12.707390   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:12.707437   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:12.746926   68713 cri.go:89] found id: ""
	I0815 18:39:12.746949   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.746957   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:12.746965   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:12.746977   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:09.853116   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.348297   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:11.655347   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:14.154929   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:13.158257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:15.158457   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.792154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:12.792180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:12.843933   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:12.843968   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:12.859583   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:12.859609   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:12.940856   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:12.940880   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:12.940895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:15.520265   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:15.533677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:15.533754   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:15.572109   68713 cri.go:89] found id: ""
	I0815 18:39:15.572135   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.572145   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:15.572153   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:15.572221   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:15.607442   68713 cri.go:89] found id: ""
	I0815 18:39:15.607472   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.607484   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:15.607492   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:15.607551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:15.642206   68713 cri.go:89] found id: ""
	I0815 18:39:15.642230   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.642238   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:15.642246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:15.642308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:15.677914   68713 cri.go:89] found id: ""
	I0815 18:39:15.677945   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.677956   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:15.677963   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:15.678049   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:15.714466   68713 cri.go:89] found id: ""
	I0815 18:39:15.714496   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.714504   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:15.714510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:15.714563   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:15.750961   68713 cri.go:89] found id: ""
	I0815 18:39:15.750987   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.750995   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:15.751002   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:15.751050   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:15.785399   68713 cri.go:89] found id: ""
	I0815 18:39:15.785434   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.785444   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:15.785450   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:15.785498   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:15.821547   68713 cri.go:89] found id: ""
	I0815 18:39:15.821571   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.821578   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:15.821586   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:15.821597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:15.875299   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:15.875329   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:15.890376   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:15.890408   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:15.957317   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:15.957337   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:15.957352   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:16.033952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:16.033997   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:14.349171   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.849292   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.850822   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.654572   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.656041   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:17.657984   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:19.658366   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.571953   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:18.584652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:18.584721   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:18.617043   68713 cri.go:89] found id: ""
	I0815 18:39:18.617066   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.617073   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:18.617079   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:18.617127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:18.651080   68713 cri.go:89] found id: ""
	I0815 18:39:18.651112   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.651123   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:18.651130   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:18.651187   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:18.686857   68713 cri.go:89] found id: ""
	I0815 18:39:18.686890   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.686901   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:18.686909   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:18.686975   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:18.719397   68713 cri.go:89] found id: ""
	I0815 18:39:18.719434   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.719444   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:18.719452   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:18.719521   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:18.758316   68713 cri.go:89] found id: ""
	I0815 18:39:18.758349   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.758360   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:18.758366   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:18.758435   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:18.791586   68713 cri.go:89] found id: ""
	I0815 18:39:18.791609   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.791617   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:18.791623   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:18.791690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:18.827905   68713 cri.go:89] found id: ""
	I0815 18:39:18.827929   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.827937   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:18.827945   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:18.828004   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:18.869371   68713 cri.go:89] found id: ""
	I0815 18:39:18.869404   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.869412   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:18.869420   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:18.869432   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:18.920124   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:18.920158   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:18.936137   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:18.936168   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:19.006877   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:19.006902   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:19.006913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:19.088909   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:19.088953   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.632734   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:21.647246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:21.647322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:21.685574   68713 cri.go:89] found id: ""
	I0815 18:39:21.685606   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.685614   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:21.685620   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:21.685676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:21.717073   68713 cri.go:89] found id: ""
	I0815 18:39:21.717112   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.717124   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:21.717133   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:21.717205   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:21.752072   68713 cri.go:89] found id: ""
	I0815 18:39:21.752101   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.752112   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:21.752120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:21.752182   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:21.786811   68713 cri.go:89] found id: ""
	I0815 18:39:21.786834   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.786842   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:21.786848   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:21.786893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:21.823694   68713 cri.go:89] found id: ""
	I0815 18:39:21.823719   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.823728   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:21.823733   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:21.823790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:21.859358   68713 cri.go:89] found id: ""
	I0815 18:39:21.859387   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.859398   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:21.859405   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:21.859469   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:21.893723   68713 cri.go:89] found id: ""
	I0815 18:39:21.893751   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.893761   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:21.893769   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:21.893826   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:21.929338   68713 cri.go:89] found id: ""
	I0815 18:39:21.929368   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.929379   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:21.929388   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:21.929414   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:21.979107   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:21.979141   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:21.993968   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:21.994005   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:22.063359   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:22.063384   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:22.063398   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:22.144303   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:22.144337   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.348202   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.349199   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.154244   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.155954   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.658572   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.658782   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.658946   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:24.688104   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:24.701230   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:24.701298   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:24.735056   68713 cri.go:89] found id: ""
	I0815 18:39:24.735086   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.735097   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:24.735104   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:24.735172   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:24.769704   68713 cri.go:89] found id: ""
	I0815 18:39:24.769732   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.769743   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:24.769751   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:24.769812   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:24.808763   68713 cri.go:89] found id: ""
	I0815 18:39:24.808788   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.808796   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:24.808807   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:24.808856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:24.846997   68713 cri.go:89] found id: ""
	I0815 18:39:24.847028   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.847038   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:24.847045   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:24.847106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:24.881681   68713 cri.go:89] found id: ""
	I0815 18:39:24.881705   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.881713   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:24.881719   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:24.881772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:24.917000   68713 cri.go:89] found id: ""
	I0815 18:39:24.917024   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.917032   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:24.917040   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:24.917088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:24.951133   68713 cri.go:89] found id: ""
	I0815 18:39:24.951156   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.951164   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:24.951170   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:24.951218   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:24.987306   68713 cri.go:89] found id: ""
	I0815 18:39:24.987331   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.987339   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:24.987347   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:24.987360   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:25.039533   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:25.039566   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:25.053011   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:25.053036   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:25.125864   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:25.125884   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:25.125895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:25.209885   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:25.209916   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:27.751681   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:27.765316   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:27.765390   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:25.848840   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.849344   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.156068   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.654722   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:28.158317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.158632   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.805820   68713 cri.go:89] found id: ""
	I0815 18:39:27.805858   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.805870   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:27.805878   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:27.805940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:27.846684   68713 cri.go:89] found id: ""
	I0815 18:39:27.846717   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.846727   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:27.846737   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:27.846801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:27.882326   68713 cri.go:89] found id: ""
	I0815 18:39:27.882358   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.882370   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:27.882378   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:27.882448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:27.917340   68713 cri.go:89] found id: ""
	I0815 18:39:27.917416   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.917431   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:27.917442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:27.917505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:27.952674   68713 cri.go:89] found id: ""
	I0815 18:39:27.952700   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.952708   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:27.952714   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:27.952763   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:27.986103   68713 cri.go:89] found id: ""
	I0815 18:39:27.986132   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.986143   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:27.986151   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:27.986212   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:28.023674   68713 cri.go:89] found id: ""
	I0815 18:39:28.023716   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.023735   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:28.023742   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:28.023807   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:28.064902   68713 cri.go:89] found id: ""
	I0815 18:39:28.064929   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.064937   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:28.064945   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:28.064957   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:28.116145   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:28.116180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:28.130435   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:28.130462   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:28.204899   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:28.204920   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:28.204931   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:28.284165   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:28.284202   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:30.824135   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:30.837515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:30.837583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:30.874671   68713 cri.go:89] found id: ""
	I0815 18:39:30.874695   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.874705   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:30.874712   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:30.874776   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:30.909990   68713 cri.go:89] found id: ""
	I0815 18:39:30.910027   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.910038   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:30.910045   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:30.910106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:30.946824   68713 cri.go:89] found id: ""
	I0815 18:39:30.946851   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.946859   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:30.946865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:30.946912   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:30.983392   68713 cri.go:89] found id: ""
	I0815 18:39:30.983419   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.983429   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:30.983437   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:30.983505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:31.023471   68713 cri.go:89] found id: ""
	I0815 18:39:31.023500   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.023510   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:31.023518   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:31.023583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:31.063586   68713 cri.go:89] found id: ""
	I0815 18:39:31.063616   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.063627   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:31.063636   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:31.063696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:31.103147   68713 cri.go:89] found id: ""
	I0815 18:39:31.103173   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.103180   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:31.103186   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:31.103237   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:31.144082   68713 cri.go:89] found id: ""
	I0815 18:39:31.144113   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.144124   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:31.144136   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:31.144150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:31.212535   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:31.212563   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:31.212586   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:31.292039   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:31.292076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:31.335023   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:31.335050   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:31.388817   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:31.388853   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:30.349110   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.349209   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.154683   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.653806   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.654716   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.658245   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.659119   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:33.925861   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:33.939604   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:33.939668   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:33.974538   68713 cri.go:89] found id: ""
	I0815 18:39:33.974563   68713 logs.go:276] 0 containers: []
	W0815 18:39:33.974571   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:33.974577   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:33.974634   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:34.009017   68713 cri.go:89] found id: ""
	I0815 18:39:34.009048   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.009058   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:34.009064   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:34.009120   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:34.049478   68713 cri.go:89] found id: ""
	I0815 18:39:34.049501   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.049517   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:34.049523   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:34.049576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:34.091011   68713 cri.go:89] found id: ""
	I0815 18:39:34.091040   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.091050   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:34.091056   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:34.091114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:34.126617   68713 cri.go:89] found id: ""
	I0815 18:39:34.126640   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.126650   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:34.126657   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:34.126706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:34.168140   68713 cri.go:89] found id: ""
	I0815 18:39:34.168169   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.168179   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:34.168187   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:34.168279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:34.205052   68713 cri.go:89] found id: ""
	I0815 18:39:34.205081   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.205091   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:34.205098   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:34.205173   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:34.238474   68713 cri.go:89] found id: ""
	I0815 18:39:34.238499   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.238506   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:34.238521   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:34.238540   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.280574   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:34.280601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:34.332662   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:34.332704   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:34.348556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:34.348591   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:34.421428   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:34.421450   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:34.421464   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.004855   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:37.019306   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:37.019378   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:37.057588   68713 cri.go:89] found id: ""
	I0815 18:39:37.057618   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.057626   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:37.057641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:37.057706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:37.095645   68713 cri.go:89] found id: ""
	I0815 18:39:37.095678   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.095687   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:37.095693   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:37.095750   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:37.131669   68713 cri.go:89] found id: ""
	I0815 18:39:37.131696   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.131711   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:37.131717   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:37.131772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:37.185065   68713 cri.go:89] found id: ""
	I0815 18:39:37.185097   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.185108   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:37.185115   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:37.185180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:37.220220   68713 cri.go:89] found id: ""
	I0815 18:39:37.220251   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.220262   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:37.220269   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:37.220322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:37.259816   68713 cri.go:89] found id: ""
	I0815 18:39:37.259849   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.259859   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:37.259868   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:37.259920   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:37.292777   68713 cri.go:89] found id: ""
	I0815 18:39:37.292807   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.292818   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:37.292825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:37.292888   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:37.328673   68713 cri.go:89] found id: ""
	I0815 18:39:37.328703   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.328714   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:37.328725   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:37.328740   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:37.379131   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:37.379172   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:37.392982   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:37.393017   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:37.470727   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:37.470750   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:37.470766   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.552353   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:37.552386   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.349765   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:36.655101   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.154433   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.158746   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.658907   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:40.094008   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:40.107681   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:40.107753   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:40.142229   68713 cri.go:89] found id: ""
	I0815 18:39:40.142254   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.142264   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:40.142271   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:40.142333   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:40.180622   68713 cri.go:89] found id: ""
	I0815 18:39:40.180650   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.180665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:40.180672   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:40.180733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:40.219085   68713 cri.go:89] found id: ""
	I0815 18:39:40.219113   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.219120   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:40.219126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:40.219174   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:40.254807   68713 cri.go:89] found id: ""
	I0815 18:39:40.254838   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.254850   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:40.254858   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:40.254940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:40.290438   68713 cri.go:89] found id: ""
	I0815 18:39:40.290466   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.290478   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:40.290484   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:40.290547   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:40.326320   68713 cri.go:89] found id: ""
	I0815 18:39:40.326356   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.326364   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:40.326370   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:40.326429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:40.361538   68713 cri.go:89] found id: ""
	I0815 18:39:40.361563   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.361570   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:40.361576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:40.361629   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:40.397275   68713 cri.go:89] found id: ""
	I0815 18:39:40.397304   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.397316   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:40.397326   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:40.397342   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:40.466042   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:40.466064   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:40.466078   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:40.544915   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:40.544951   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:40.584992   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:40.585019   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:40.634792   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:40.634837   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:39.848609   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.849831   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.655153   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.655372   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:42.159650   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:44.658547   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.149819   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:43.164578   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:43.164649   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:43.199576   68713 cri.go:89] found id: ""
	I0815 18:39:43.199621   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.199633   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:43.199641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:43.199702   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:43.233783   68713 cri.go:89] found id: ""
	I0815 18:39:43.233820   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.233833   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:43.233840   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:43.233911   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:43.269406   68713 cri.go:89] found id: ""
	I0815 18:39:43.269437   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.269449   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:43.269457   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:43.269570   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:43.310686   68713 cri.go:89] found id: ""
	I0815 18:39:43.310715   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.310726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:43.310734   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:43.310795   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:43.348662   68713 cri.go:89] found id: ""
	I0815 18:39:43.348689   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.348699   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:43.348706   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:43.348767   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:43.385676   68713 cri.go:89] found id: ""
	I0815 18:39:43.385714   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.385726   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:43.385737   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:43.385802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:43.422605   68713 cri.go:89] found id: ""
	I0815 18:39:43.422634   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.422645   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:43.422653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:43.422712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:43.463208   68713 cri.go:89] found id: ""
	I0815 18:39:43.463238   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.463249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:43.463260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:43.463278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:43.476637   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:43.476664   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:43.552239   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:43.552263   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:43.552278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:43.653055   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:43.653108   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:43.699166   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:43.699192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.251725   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:46.265164   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:46.265240   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:46.305095   68713 cri.go:89] found id: ""
	I0815 18:39:46.305123   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.305133   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:46.305140   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:46.305196   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:46.349744   68713 cri.go:89] found id: ""
	I0815 18:39:46.349773   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.349783   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:46.349790   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:46.349858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:46.385807   68713 cri.go:89] found id: ""
	I0815 18:39:46.385839   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.385847   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:46.385853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:46.385908   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:46.419977   68713 cri.go:89] found id: ""
	I0815 18:39:46.420011   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.420024   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:46.420031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:46.420093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:46.454852   68713 cri.go:89] found id: ""
	I0815 18:39:46.454883   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.454894   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:46.454901   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:46.454962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:46.497157   68713 cri.go:89] found id: ""
	I0815 18:39:46.497192   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.497202   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:46.497210   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:46.497271   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:46.530191   68713 cri.go:89] found id: ""
	I0815 18:39:46.530218   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.530226   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:46.530232   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:46.530282   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:46.566024   68713 cri.go:89] found id: ""
	I0815 18:39:46.566050   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.566063   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:46.566074   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:46.566103   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.621969   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:46.622004   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:46.636576   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:46.636603   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:46.706819   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:46.706842   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:46.706857   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:46.786589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:46.786634   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:44.352685   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.849090   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.849424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:45.655900   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.154862   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.658710   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.157317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.324853   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:49.343543   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:49.343618   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:49.396260   68713 cri.go:89] found id: ""
	I0815 18:39:49.396292   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.396303   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:49.396311   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:49.396380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:49.437579   68713 cri.go:89] found id: ""
	I0815 18:39:49.437604   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.437612   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:49.437617   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:49.437663   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:49.476206   68713 cri.go:89] found id: ""
	I0815 18:39:49.476232   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.476239   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:49.476245   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:49.476296   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:49.511324   68713 cri.go:89] found id: ""
	I0815 18:39:49.511349   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.511357   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:49.511363   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:49.511428   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:49.545875   68713 cri.go:89] found id: ""
	I0815 18:39:49.545907   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.545916   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:49.545922   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:49.545981   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:49.582176   68713 cri.go:89] found id: ""
	I0815 18:39:49.582204   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.582228   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:49.582246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:49.582309   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:49.623288   68713 cri.go:89] found id: ""
	I0815 18:39:49.623318   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.623327   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:49.623333   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:49.623394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:49.662352   68713 cri.go:89] found id: ""
	I0815 18:39:49.662377   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.662389   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:49.662399   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:49.662424   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:49.745582   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:49.745617   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:49.785256   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:49.785295   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:49.835944   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:49.835979   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:49.852859   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:49.852886   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:49.928427   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.429223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:52.442384   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:52.442460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:52.480515   68713 cri.go:89] found id: ""
	I0815 18:39:52.480543   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.480553   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:52.480558   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:52.480605   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:52.518346   68713 cri.go:89] found id: ""
	I0815 18:39:52.518382   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.518393   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:52.518401   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:52.518460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:52.557696   68713 cri.go:89] found id: ""
	I0815 18:39:52.557722   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.557731   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:52.557736   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:52.557786   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:52.590849   68713 cri.go:89] found id: ""
	I0815 18:39:52.590879   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.590890   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:52.590898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:52.590961   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:52.629950   68713 cri.go:89] found id: ""
	I0815 18:39:52.629980   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.629992   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:52.629999   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:52.630047   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:52.666039   68713 cri.go:89] found id: ""
	I0815 18:39:52.666070   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.666081   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:52.666089   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:52.666146   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:52.699917   68713 cri.go:89] found id: ""
	I0815 18:39:52.699941   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.699949   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:52.699955   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:52.700001   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:52.735944   68713 cri.go:89] found id: ""
	I0815 18:39:52.735973   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.735981   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:52.735989   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:52.736001   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:39:50.849633   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.850298   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:50.155118   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.155166   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:54.653844   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:51.159401   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:53.658513   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:39:52.805519   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.805537   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:52.805559   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:52.894175   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:52.894213   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:52.932974   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:52.933006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:52.984206   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:52.984244   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.498477   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:55.511319   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:55.511380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:55.544899   68713 cri.go:89] found id: ""
	I0815 18:39:55.544928   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.544936   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:55.544943   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:55.545003   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:55.578821   68713 cri.go:89] found id: ""
	I0815 18:39:55.578855   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.578864   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:55.578869   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:55.578922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:55.615392   68713 cri.go:89] found id: ""
	I0815 18:39:55.615422   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.615434   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:55.615441   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:55.615501   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:55.653456   68713 cri.go:89] found id: ""
	I0815 18:39:55.653482   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.653493   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:55.653500   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:55.653558   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:55.687716   68713 cri.go:89] found id: ""
	I0815 18:39:55.687741   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.687749   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:55.687755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:55.687802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:55.725518   68713 cri.go:89] found id: ""
	I0815 18:39:55.725543   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.725553   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:55.725561   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:55.725631   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:55.758451   68713 cri.go:89] found id: ""
	I0815 18:39:55.758479   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.758490   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:55.758498   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:55.758560   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:55.792653   68713 cri.go:89] found id: ""
	I0815 18:39:55.792680   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.792687   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:55.792699   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:55.792710   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:55.832127   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:55.832156   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:55.885255   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:55.885289   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.898980   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:55.899009   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:55.967579   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:55.967609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:55.967624   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:55.348998   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:57.349656   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.654840   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.655471   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.158348   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.658194   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.658852   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.543524   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:58.556338   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:58.556412   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:58.593359   68713 cri.go:89] found id: ""
	I0815 18:39:58.593390   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.593401   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:58.593409   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:58.593472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:58.628446   68713 cri.go:89] found id: ""
	I0815 18:39:58.628471   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.628481   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:58.628504   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:58.628567   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:58.663930   68713 cri.go:89] found id: ""
	I0815 18:39:58.663954   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.663964   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:58.663971   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:58.664028   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:58.701070   68713 cri.go:89] found id: ""
	I0815 18:39:58.701095   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.701103   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:58.701108   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:58.701156   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:58.734427   68713 cri.go:89] found id: ""
	I0815 18:39:58.734457   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.734468   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:58.734476   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:58.734543   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:58.769121   68713 cri.go:89] found id: ""
	I0815 18:39:58.769144   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.769152   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:58.769162   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:58.769215   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:58.805771   68713 cri.go:89] found id: ""
	I0815 18:39:58.805796   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.805803   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:58.805808   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:58.805856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:58.840288   68713 cri.go:89] found id: ""
	I0815 18:39:58.840315   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.840325   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:58.840336   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:58.840351   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:58.895856   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:58.895893   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:58.909453   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:58.909478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:58.975939   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:58.975960   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:58.975971   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.055318   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:59.055353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.595588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:01.608625   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:01.608690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:01.646105   68713 cri.go:89] found id: ""
	I0815 18:40:01.646133   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.646144   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:01.646151   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:01.646214   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:01.685162   68713 cri.go:89] found id: ""
	I0815 18:40:01.685192   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.685202   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:01.685210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:01.685261   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:01.721452   68713 cri.go:89] found id: ""
	I0815 18:40:01.721479   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.721499   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:01.721507   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:01.721576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:01.762288   68713 cri.go:89] found id: ""
	I0815 18:40:01.762318   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.762331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:01.762339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:01.762429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:01.800547   68713 cri.go:89] found id: ""
	I0815 18:40:01.800579   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.800590   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:01.800598   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:01.800660   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:01.839182   68713 cri.go:89] found id: ""
	I0815 18:40:01.839214   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.839223   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:01.839229   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:01.839294   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:01.875364   68713 cri.go:89] found id: ""
	I0815 18:40:01.875390   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.875398   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:01.875404   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:01.875452   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:01.910485   68713 cri.go:89] found id: ""
	I0815 18:40:01.910512   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.910521   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:01.910535   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:01.910547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.951970   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:01.951998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:02.005720   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:02.005764   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:02.020941   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:02.020969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:02.101206   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:02.101224   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:02.101236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.850909   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:02.349180   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.659366   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.153614   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.158375   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.159868   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:04.687482   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:04.701501   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:04.701562   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.739613   68713 cri.go:89] found id: ""
	I0815 18:40:04.739636   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.739644   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:04.739650   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:04.739704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:04.774419   68713 cri.go:89] found id: ""
	I0815 18:40:04.774443   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.774453   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:04.774460   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:04.774522   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:04.809516   68713 cri.go:89] found id: ""
	I0815 18:40:04.809538   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.809547   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:04.809552   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:04.809612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:04.843822   68713 cri.go:89] found id: ""
	I0815 18:40:04.843850   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.843870   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:04.843878   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:04.843942   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:04.883853   68713 cri.go:89] found id: ""
	I0815 18:40:04.883881   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.883892   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:04.883900   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:04.883962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:04.918811   68713 cri.go:89] found id: ""
	I0815 18:40:04.918838   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.918846   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:04.918852   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:04.918903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:04.953076   68713 cri.go:89] found id: ""
	I0815 18:40:04.953101   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.953110   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:04.953116   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:04.953163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:04.988219   68713 cri.go:89] found id: ""
	I0815 18:40:04.988246   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.988255   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:04.988264   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:04.988275   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:05.060859   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:05.060896   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:05.060913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:05.146768   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:05.146817   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:05.187816   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:05.187845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:05.239027   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:05.239067   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:07.754503   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:07.769608   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:07.769695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:06.850409   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.155042   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.654547   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:09.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.658972   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:10.159255   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.804435   68713 cri.go:89] found id: ""
	I0815 18:40:07.804460   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.804468   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:07.804474   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:07.804551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:07.839760   68713 cri.go:89] found id: ""
	I0815 18:40:07.839787   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.839797   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:07.839804   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:07.839868   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:07.877984   68713 cri.go:89] found id: ""
	I0815 18:40:07.878009   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.878017   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:07.878022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:07.878070   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:07.914294   68713 cri.go:89] found id: ""
	I0815 18:40:07.914319   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.914328   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:07.914336   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:07.914395   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:07.948751   68713 cri.go:89] found id: ""
	I0815 18:40:07.948777   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.948787   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:07.948795   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:07.948861   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:07.982262   68713 cri.go:89] found id: ""
	I0815 18:40:07.982288   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.982296   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:07.982302   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:07.982358   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:08.015560   68713 cri.go:89] found id: ""
	I0815 18:40:08.015588   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.015596   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:08.015602   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:08.015662   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:08.049854   68713 cri.go:89] found id: ""
	I0815 18:40:08.049878   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.049885   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:08.049893   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:08.049905   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:08.102269   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:08.102303   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:08.117181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:08.117209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:08.188586   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:08.188609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:08.188623   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:08.272204   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:08.272239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:10.813223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:10.826181   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:10.826257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:10.863728   68713 cri.go:89] found id: ""
	I0815 18:40:10.863753   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.863761   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:10.863766   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:10.863813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:10.898074   68713 cri.go:89] found id: ""
	I0815 18:40:10.898102   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.898113   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:10.898121   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:10.898183   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:10.933948   68713 cri.go:89] found id: ""
	I0815 18:40:10.933980   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.933991   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:10.933998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:10.934059   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:10.972402   68713 cri.go:89] found id: ""
	I0815 18:40:10.972428   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.972436   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:10.972442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:10.972509   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:11.006814   68713 cri.go:89] found id: ""
	I0815 18:40:11.006843   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.006851   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:11.006857   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:11.006909   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:11.042739   68713 cri.go:89] found id: ""
	I0815 18:40:11.042763   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.042771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:11.042777   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:11.042835   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:11.079132   68713 cri.go:89] found id: ""
	I0815 18:40:11.079164   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.079173   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:11.079179   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:11.079228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:11.113271   68713 cri.go:89] found id: ""
	I0815 18:40:11.113298   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.113309   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:11.113317   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:11.113328   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:11.166669   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:11.166698   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:11.180789   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:11.180815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:11.247954   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:11.247985   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:11.247999   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:11.331952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:11.331995   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:09.349194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.349627   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.850439   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.655088   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.656674   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:12.658287   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:15.158361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.874466   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:13.888346   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:13.888416   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:13.922542   68713 cri.go:89] found id: ""
	I0815 18:40:13.922569   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.922579   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:13.922586   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:13.922654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:13.958039   68713 cri.go:89] found id: ""
	I0815 18:40:13.958066   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.958076   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:13.958082   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:13.958131   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:13.994095   68713 cri.go:89] found id: ""
	I0815 18:40:13.994125   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.994136   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:13.994144   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:13.994195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:14.027918   68713 cri.go:89] found id: ""
	I0815 18:40:14.027949   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.027960   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:14.027969   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:14.028027   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:14.063849   68713 cri.go:89] found id: ""
	I0815 18:40:14.063879   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.063889   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:14.063897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:14.063957   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:14.098444   68713 cri.go:89] found id: ""
	I0815 18:40:14.098473   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.098483   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:14.098490   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:14.098553   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:14.136834   68713 cri.go:89] found id: ""
	I0815 18:40:14.136861   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.136874   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:14.136880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:14.136925   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:14.172377   68713 cri.go:89] found id: ""
	I0815 18:40:14.172400   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.172408   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:14.172415   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:14.172430   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:14.212212   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:14.212242   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:14.268412   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:14.268450   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:14.282978   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:14.283006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:14.352777   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:14.352796   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:14.352822   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:16.939906   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:16.953118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:16.953178   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:16.991697   68713 cri.go:89] found id: ""
	I0815 18:40:16.991723   68713 logs.go:276] 0 containers: []
	W0815 18:40:16.991731   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:16.991736   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:16.991801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:17.027572   68713 cri.go:89] found id: ""
	I0815 18:40:17.027602   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.027613   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:17.027623   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:17.027682   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:17.060718   68713 cri.go:89] found id: ""
	I0815 18:40:17.060750   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.060763   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:17.060771   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:17.060829   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:17.096746   68713 cri.go:89] found id: ""
	I0815 18:40:17.096771   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.096780   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:17.096786   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:17.096846   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:17.130755   68713 cri.go:89] found id: ""
	I0815 18:40:17.130791   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.130802   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:17.130810   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:17.130872   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:17.167991   68713 cri.go:89] found id: ""
	I0815 18:40:17.168016   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.168026   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:17.168034   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:17.168093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:17.200695   68713 cri.go:89] found id: ""
	I0815 18:40:17.200722   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.200733   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:17.200741   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:17.200799   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:17.237788   68713 cri.go:89] found id: ""
	I0815 18:40:17.237816   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.237824   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:17.237833   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:17.237848   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:17.288888   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:17.288921   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:17.302862   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:17.302903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:17.370062   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:17.370085   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:17.370100   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:17.444742   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:17.444781   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:16.349749   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.849197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:16.155555   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.654875   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:17.160009   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.657774   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.984813   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:19.998010   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:19.998077   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:20.032880   68713 cri.go:89] found id: ""
	I0815 18:40:20.032903   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.032912   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:20.032918   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:20.032973   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:20.069191   68713 cri.go:89] found id: ""
	I0815 18:40:20.069224   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.069236   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:20.069243   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:20.069301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:20.101930   68713 cri.go:89] found id: ""
	I0815 18:40:20.101954   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.101962   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:20.101968   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:20.102016   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:20.136981   68713 cri.go:89] found id: ""
	I0815 18:40:20.137006   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.137014   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:20.137020   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:20.137066   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:20.174517   68713 cri.go:89] found id: ""
	I0815 18:40:20.174543   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.174550   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:20.174556   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:20.174611   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:20.208525   68713 cri.go:89] found id: ""
	I0815 18:40:20.208549   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.208559   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:20.208567   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:20.208626   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:20.240824   68713 cri.go:89] found id: ""
	I0815 18:40:20.240855   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.240867   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:20.240874   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:20.240946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:20.277683   68713 cri.go:89] found id: ""
	I0815 18:40:20.277710   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.277720   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:20.277728   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:20.277739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:20.324271   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:20.324304   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:20.376250   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:20.376285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:20.392777   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:20.392813   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:20.464122   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:20.464156   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:20.464180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:20.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:22.849591   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:20.654982   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.154537   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:21.658354   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.658505   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.041684   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:23.055779   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:23.055858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:23.095391   68713 cri.go:89] found id: ""
	I0815 18:40:23.095414   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.095426   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:23.095432   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:23.095483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:23.134907   68713 cri.go:89] found id: ""
	I0815 18:40:23.134936   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.134943   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:23.134949   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:23.134994   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:23.171806   68713 cri.go:89] found id: ""
	I0815 18:40:23.171845   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.171854   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:23.171861   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:23.171924   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:23.205378   68713 cri.go:89] found id: ""
	I0815 18:40:23.205404   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.205412   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:23.205417   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:23.205467   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:23.239503   68713 cri.go:89] found id: ""
	I0815 18:40:23.239531   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.239540   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:23.239547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:23.239614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:23.275802   68713 cri.go:89] found id: ""
	I0815 18:40:23.275828   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.275842   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:23.275849   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:23.275894   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:23.310127   68713 cri.go:89] found id: ""
	I0815 18:40:23.310154   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.310167   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:23.310173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:23.310219   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:23.344646   68713 cri.go:89] found id: ""
	I0815 18:40:23.344674   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.344685   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:23.344696   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:23.344711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:23.397260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:23.397310   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:23.425518   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:23.425553   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:23.495528   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:23.495547   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:23.495562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:23.574489   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:23.574524   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.119044   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:26.133806   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:26.133880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:26.175683   68713 cri.go:89] found id: ""
	I0815 18:40:26.175711   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.175722   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:26.175730   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:26.175789   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:26.210634   68713 cri.go:89] found id: ""
	I0815 18:40:26.210658   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.210665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:26.210671   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:26.210724   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:26.244146   68713 cri.go:89] found id: ""
	I0815 18:40:26.244176   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.244187   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:26.244195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:26.244274   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:26.277312   68713 cri.go:89] found id: ""
	I0815 18:40:26.277335   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.277343   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:26.277349   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:26.277410   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:26.311538   68713 cri.go:89] found id: ""
	I0815 18:40:26.311562   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.311570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:26.311576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:26.311623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:26.347816   68713 cri.go:89] found id: ""
	I0815 18:40:26.347840   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.347847   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:26.347853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:26.347906   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:26.381211   68713 cri.go:89] found id: ""
	I0815 18:40:26.381234   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.381242   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:26.381248   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:26.381303   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:26.413982   68713 cri.go:89] found id: ""
	I0815 18:40:26.414010   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.414018   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:26.414027   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:26.414038   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:26.500686   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:26.500721   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.537615   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:26.537642   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:26.590119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:26.590150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:26.603713   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:26.603739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:26.675455   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:25.349400   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.853388   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:25.155463   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.155580   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.156973   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:26.158898   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:28.658576   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.176084   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:29.189743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:29.189813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:29.225500   68713 cri.go:89] found id: ""
	I0815 18:40:29.225536   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.225548   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:29.225557   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:29.225614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:29.261839   68713 cri.go:89] found id: ""
	I0815 18:40:29.261866   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.261877   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:29.261884   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:29.261946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:29.296685   68713 cri.go:89] found id: ""
	I0815 18:40:29.296708   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.296716   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:29.296728   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:29.296787   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:29.332524   68713 cri.go:89] found id: ""
	I0815 18:40:29.332550   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.332558   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:29.332564   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:29.332615   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:29.368918   68713 cri.go:89] found id: ""
	I0815 18:40:29.368943   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.368953   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:29.368961   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:29.369020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:29.403175   68713 cri.go:89] found id: ""
	I0815 18:40:29.403200   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.403211   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:29.403218   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:29.403279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:29.438957   68713 cri.go:89] found id: ""
	I0815 18:40:29.438981   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.438989   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:29.438994   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:29.439051   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:29.472153   68713 cri.go:89] found id: ""
	I0815 18:40:29.472184   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.472195   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:29.472206   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:29.472221   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:29.560484   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:29.560547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:29.600366   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:29.600402   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:29.656536   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:29.656569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:29.669899   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:29.669925   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:29.738515   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:32.239207   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:32.253976   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:32.254048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:32.290918   68713 cri.go:89] found id: ""
	I0815 18:40:32.290942   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.290951   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:32.290957   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:32.291009   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:32.325567   68713 cri.go:89] found id: ""
	I0815 18:40:32.325596   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.325606   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:32.325613   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:32.325674   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:32.360959   68713 cri.go:89] found id: ""
	I0815 18:40:32.360994   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.361005   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:32.361015   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:32.361090   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:32.398583   68713 cri.go:89] found id: ""
	I0815 18:40:32.398614   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.398625   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:32.398633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:32.398696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:32.432980   68713 cri.go:89] found id: ""
	I0815 18:40:32.433007   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.433017   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:32.433024   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:32.433088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:32.467645   68713 cri.go:89] found id: ""
	I0815 18:40:32.467678   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.467688   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:32.467697   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:32.467757   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:32.504233   68713 cri.go:89] found id: ""
	I0815 18:40:32.504265   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.504275   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:32.504282   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:32.504347   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:32.539127   68713 cri.go:89] found id: ""
	I0815 18:40:32.539160   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.539175   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:32.539186   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:32.539200   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:32.620782   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:32.620818   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:32.660920   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:32.660950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:32.714392   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:32.714425   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:32.727629   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:32.727655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:40:30.349267   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:32.349896   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:34.154871   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.157219   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:33.158733   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:35.158871   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:40:32.801258   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.301393   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:35.315460   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:35.315515   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:35.352266   68713 cri.go:89] found id: ""
	I0815 18:40:35.352287   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.352295   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:35.352301   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:35.352345   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:35.387274   68713 cri.go:89] found id: ""
	I0815 18:40:35.387305   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.387316   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:35.387324   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:35.387386   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:35.422376   68713 cri.go:89] found id: ""
	I0815 18:40:35.422403   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.422413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:35.422419   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:35.422464   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:35.456423   68713 cri.go:89] found id: ""
	I0815 18:40:35.456452   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.456459   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:35.456465   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:35.456544   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:35.494878   68713 cri.go:89] found id: ""
	I0815 18:40:35.494903   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.494912   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:35.494919   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:35.494980   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:35.528027   68713 cri.go:89] found id: ""
	I0815 18:40:35.528051   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.528062   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:35.528069   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:35.528128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:35.568543   68713 cri.go:89] found id: ""
	I0815 18:40:35.568570   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.568580   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:35.568587   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:35.568654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:35.627717   68713 cri.go:89] found id: ""
	I0815 18:40:35.627747   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.627766   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:35.627777   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:35.627792   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:35.691497   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:35.691530   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:35.705062   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:35.705092   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:35.783785   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.783806   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:35.783819   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:35.867282   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:35.867317   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:34.848226   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.849242   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.850686   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.154981   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.155165   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:37.659017   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.158408   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.407940   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:38.421571   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:38.421648   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:38.456551   68713 cri.go:89] found id: ""
	I0815 18:40:38.456586   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.456597   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:38.456604   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:38.456665   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:38.494133   68713 cri.go:89] found id: ""
	I0815 18:40:38.494167   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.494179   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:38.494186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:38.494253   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:38.531566   68713 cri.go:89] found id: ""
	I0815 18:40:38.531599   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.531610   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:38.531617   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:38.531678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:38.567613   68713 cri.go:89] found id: ""
	I0815 18:40:38.567640   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.567652   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:38.567659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:38.567717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:38.603172   68713 cri.go:89] found id: ""
	I0815 18:40:38.603201   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.603212   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:38.603225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:38.603284   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:38.639600   68713 cri.go:89] found id: ""
	I0815 18:40:38.639629   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.639640   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:38.639648   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:38.639710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:38.675780   68713 cri.go:89] found id: ""
	I0815 18:40:38.675811   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.675821   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:38.675828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:38.675885   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:38.708745   68713 cri.go:89] found id: ""
	I0815 18:40:38.708775   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.708786   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:38.708796   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:38.708815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:38.722485   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:38.722514   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:38.793913   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:38.793936   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:38.793950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:38.880706   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:38.880744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:38.919505   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:38.919533   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.472452   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:41.486204   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:41.486264   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:41.520251   68713 cri.go:89] found id: ""
	I0815 18:40:41.520282   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.520294   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:41.520302   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:41.520362   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:41.561294   68713 cri.go:89] found id: ""
	I0815 18:40:41.561325   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.561336   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:41.561343   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:41.561403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:41.595290   68713 cri.go:89] found id: ""
	I0815 18:40:41.595318   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.595326   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:41.595331   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:41.595381   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:41.629706   68713 cri.go:89] found id: ""
	I0815 18:40:41.629736   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.629744   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:41.629750   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:41.629816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:41.671862   68713 cri.go:89] found id: ""
	I0815 18:40:41.671885   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.671893   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:41.671898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:41.671951   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:41.710298   68713 cri.go:89] found id: ""
	I0815 18:40:41.710349   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.710360   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:41.710368   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:41.710425   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:41.745434   68713 cri.go:89] found id: ""
	I0815 18:40:41.745472   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.745487   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:41.745492   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:41.745548   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:41.781038   68713 cri.go:89] found id: ""
	I0815 18:40:41.781073   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.781081   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:41.781088   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:41.781099   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:41.863977   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:41.864023   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:41.907477   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:41.907505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.962921   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:41.962956   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:41.976458   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:41.976505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:42.044372   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:41.349260   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.349615   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.656633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.154626   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:42.658519   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.659640   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.544803   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:44.559538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:44.559595   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:44.595471   68713 cri.go:89] found id: ""
	I0815 18:40:44.595501   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.595511   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:44.595518   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:44.595581   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:44.630148   68713 cri.go:89] found id: ""
	I0815 18:40:44.630173   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.630181   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:44.630189   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:44.630245   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:44.666084   68713 cri.go:89] found id: ""
	I0815 18:40:44.666110   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.666119   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:44.666126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:44.666180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:44.700286   68713 cri.go:89] found id: ""
	I0815 18:40:44.700320   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.700331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:44.700339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:44.700394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:44.734115   68713 cri.go:89] found id: ""
	I0815 18:40:44.734143   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.734151   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:44.734157   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:44.734216   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:44.770306   68713 cri.go:89] found id: ""
	I0815 18:40:44.770363   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.770376   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:44.770383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:44.770453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:44.806766   68713 cri.go:89] found id: ""
	I0815 18:40:44.806790   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.806798   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:44.806803   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:44.806865   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:44.843574   68713 cri.go:89] found id: ""
	I0815 18:40:44.843603   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.843613   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:44.843623   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:44.843638   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:44.896119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:44.896148   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:44.909537   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:44.909562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:44.980268   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:44.980290   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:44.980307   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:45.066589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:45.066626   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:47.605934   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:47.620644   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:47.620709   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:47.660939   68713 cri.go:89] found id: ""
	I0815 18:40:47.660960   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.660967   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:47.660973   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:47.661021   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:47.701018   68713 cri.go:89] found id: ""
	I0815 18:40:47.701047   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.701059   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:47.701107   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:47.701177   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:47.739487   68713 cri.go:89] found id: ""
	I0815 18:40:47.739514   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.739523   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:47.739528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:47.739584   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:47.781483   68713 cri.go:89] found id: ""
	I0815 18:40:47.781508   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.781515   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:47.781520   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:47.781571   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:45.850565   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.851368   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:45.156177   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.654437   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.157895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:49.658101   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.816781   68713 cri.go:89] found id: ""
	I0815 18:40:47.816806   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.816813   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:47.816819   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:47.816875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:47.853951   68713 cri.go:89] found id: ""
	I0815 18:40:47.853976   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.853984   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:47.853990   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:47.854062   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:47.892208   68713 cri.go:89] found id: ""
	I0815 18:40:47.892237   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.892246   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:47.892252   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:47.892311   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:47.926916   68713 cri.go:89] found id: ""
	I0815 18:40:47.926944   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.926965   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:47.926976   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:47.926990   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:48.002907   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:48.002927   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:48.002942   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:48.085727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:48.085762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:48.127192   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:48.127224   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:48.180172   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:48.180208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:50.694573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:50.709411   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:50.709472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:50.750956   68713 cri.go:89] found id: ""
	I0815 18:40:50.750985   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.750994   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:50.751000   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:50.751048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:50.791072   68713 cri.go:89] found id: ""
	I0815 18:40:50.791149   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.791174   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:50.791186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:50.791247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:50.827692   68713 cri.go:89] found id: ""
	I0815 18:40:50.827717   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.827728   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:50.827735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:50.827794   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:50.866587   68713 cri.go:89] found id: ""
	I0815 18:40:50.866616   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.866626   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:50.866633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:50.866692   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:50.907012   68713 cri.go:89] found id: ""
	I0815 18:40:50.907040   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.907047   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:50.907053   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:50.907101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:50.951212   68713 cri.go:89] found id: ""
	I0815 18:40:50.951243   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.951256   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:50.951263   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:50.951316   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:50.989771   68713 cri.go:89] found id: ""
	I0815 18:40:50.989802   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.989812   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:50.989818   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:50.989867   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:51.024423   68713 cri.go:89] found id: ""
	I0815 18:40:51.024454   68713 logs.go:276] 0 containers: []
	W0815 18:40:51.024465   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:51.024475   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:51.024500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:51.076973   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:51.077012   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:51.090963   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:51.090989   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:51.169981   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:51.170005   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:51.170029   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:51.248990   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:51.249040   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:50.349092   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.350278   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:50.154517   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.148131   68248 pod_ready.go:82] duration metric: took 4m0.000077937s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	E0815 18:40:52.148161   68248 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 18:40:52.148183   68248 pod_ready.go:39] duration metric: took 4m13.224994468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:40:52.148235   68248 kubeadm.go:597] duration metric: took 4m20.945128985s to restartPrimaryControlPlane
	W0815 18:40:52.148324   68248 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:40:52.148376   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:40:51.660289   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:54.157718   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:53.790172   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:53.803752   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:53.803816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:53.843203   68713 cri.go:89] found id: ""
	I0815 18:40:53.843231   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.843246   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:53.843254   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:53.843314   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:53.878975   68713 cri.go:89] found id: ""
	I0815 18:40:53.879000   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.879008   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:53.879013   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:53.879078   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:53.915640   68713 cri.go:89] found id: ""
	I0815 18:40:53.915668   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.915675   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:53.915683   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:53.915746   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:53.956312   68713 cri.go:89] found id: ""
	I0815 18:40:53.956340   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.956356   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:53.956365   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:53.956426   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:53.992276   68713 cri.go:89] found id: ""
	I0815 18:40:53.992304   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.992314   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:53.992322   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:53.992387   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:54.034653   68713 cri.go:89] found id: ""
	I0815 18:40:54.034682   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.034693   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:54.034701   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:54.034761   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:54.072993   68713 cri.go:89] found id: ""
	I0815 18:40:54.073018   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.073027   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:54.073038   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:54.073107   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:54.107414   68713 cri.go:89] found id: ""
	I0815 18:40:54.107446   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.107456   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:54.107466   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:54.107481   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:54.145900   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:54.145928   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:54.197609   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:54.197639   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:54.211384   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:54.211410   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:54.280991   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:54.281018   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:54.281031   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:56.868270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:56.881168   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:56.881248   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:56.915206   68713 cri.go:89] found id: ""
	I0815 18:40:56.915235   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.915243   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:56.915249   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:56.915308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:56.950838   68713 cri.go:89] found id: ""
	I0815 18:40:56.950864   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.950873   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:56.950879   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:56.950937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:56.993625   68713 cri.go:89] found id: ""
	I0815 18:40:56.993649   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.993656   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:56.993662   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:56.993718   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:57.029109   68713 cri.go:89] found id: ""
	I0815 18:40:57.029139   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.029150   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:57.029158   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:57.029213   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:57.063480   68713 cri.go:89] found id: ""
	I0815 18:40:57.063518   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.063530   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:57.063538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:57.063598   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:57.102830   68713 cri.go:89] found id: ""
	I0815 18:40:57.102859   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.102870   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:57.102877   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:57.102938   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:57.137116   68713 cri.go:89] found id: ""
	I0815 18:40:57.137146   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.137159   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:57.137173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:57.137235   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:57.174678   68713 cri.go:89] found id: ""
	I0815 18:40:57.174706   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.174717   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:57.174727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:57.174741   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:57.213270   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:57.213311   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:57.269463   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:57.269500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:57.283891   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:57.283915   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:57.355563   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:57.355589   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:57.355601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:54.849266   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:57.350343   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:56.657843   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:58.658098   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:59.943493   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:59.957225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:59.957285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:59.993113   68713 cri.go:89] found id: ""
	I0815 18:40:59.993142   68713 logs.go:276] 0 containers: []
	W0815 18:40:59.993153   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:59.993167   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:59.993228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:00.033485   68713 cri.go:89] found id: ""
	I0815 18:41:00.033515   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.033525   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:00.033533   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:00.033594   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:00.070808   68713 cri.go:89] found id: ""
	I0815 18:41:00.070830   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.070838   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:00.070844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:00.070893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:00.113043   68713 cri.go:89] found id: ""
	I0815 18:41:00.113067   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.113076   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:00.113082   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:00.113139   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:00.148089   68713 cri.go:89] found id: ""
	I0815 18:41:00.148118   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.148129   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:00.148136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:00.148206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:00.188343   68713 cri.go:89] found id: ""
	I0815 18:41:00.188375   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.188386   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:00.188394   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:00.188448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:00.224287   68713 cri.go:89] found id: ""
	I0815 18:41:00.224312   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.224323   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:00.224337   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:00.224398   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:00.263983   68713 cri.go:89] found id: ""
	I0815 18:41:00.264008   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.264016   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:00.264025   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:00.264037   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:00.278057   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:00.278083   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:00.355112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:00.355133   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:00.355146   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:00.436636   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:00.436672   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:00.474774   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:00.474801   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:59.849797   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:02.349363   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:01.158004   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.158380   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:05.658860   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.027434   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:03.041422   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:03.041496   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:03.074093   68713 cri.go:89] found id: ""
	I0815 18:41:03.074119   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.074130   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:03.074138   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:03.074198   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:03.111489   68713 cri.go:89] found id: ""
	I0815 18:41:03.111517   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.111529   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:03.111537   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:03.111599   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:03.147716   68713 cri.go:89] found id: ""
	I0815 18:41:03.147747   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.147756   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:03.147762   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:03.147825   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:03.184609   68713 cri.go:89] found id: ""
	I0815 18:41:03.184635   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.184644   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:03.184652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:03.184710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:03.221839   68713 cri.go:89] found id: ""
	I0815 18:41:03.221869   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.221878   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:03.221883   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:03.221935   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:03.262619   68713 cri.go:89] found id: ""
	I0815 18:41:03.262649   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.262661   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:03.262669   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:03.262733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:03.297826   68713 cri.go:89] found id: ""
	I0815 18:41:03.297849   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.297864   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:03.297875   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:03.297922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:03.345046   68713 cri.go:89] found id: ""
	I0815 18:41:03.345074   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.345083   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:03.345095   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:03.345133   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:03.416878   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:03.416905   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:03.416920   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:03.491548   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:03.491583   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:03.533821   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:03.533852   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:03.587749   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:03.587787   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.104002   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:06.118123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:06.118195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:06.156179   68713 cri.go:89] found id: ""
	I0815 18:41:06.156204   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.156213   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:06.156218   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:06.156275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:06.192834   68713 cri.go:89] found id: ""
	I0815 18:41:06.192858   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.192866   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:06.192871   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:06.192918   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:06.228355   68713 cri.go:89] found id: ""
	I0815 18:41:06.228379   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.228387   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:06.228393   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:06.228453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:06.262041   68713 cri.go:89] found id: ""
	I0815 18:41:06.262068   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.262079   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:06.262086   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:06.262152   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:06.303217   68713 cri.go:89] found id: ""
	I0815 18:41:06.303249   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.303261   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:06.303268   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:06.303335   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:06.337180   68713 cri.go:89] found id: ""
	I0815 18:41:06.337208   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.337215   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:06.337222   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:06.337270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:06.375054   68713 cri.go:89] found id: ""
	I0815 18:41:06.375081   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.375088   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:06.375095   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:06.375163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:06.412188   68713 cri.go:89] found id: ""
	I0815 18:41:06.412216   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.412227   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:06.412239   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:06.412255   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.425607   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:06.425633   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:06.500853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:06.500872   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:06.500883   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:06.577297   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:06.577333   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:06.620209   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:06.620239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:04.848677   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:06.849254   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.849300   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.157734   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:10.157969   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:09.171606   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:09.184197   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:09.184257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:09.217865   68713 cri.go:89] found id: ""
	I0815 18:41:09.217893   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.217904   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:09.217912   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:09.217967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:09.254032   68713 cri.go:89] found id: ""
	I0815 18:41:09.254055   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.254064   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:09.254073   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:09.254128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:09.291772   68713 cri.go:89] found id: ""
	I0815 18:41:09.291798   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.291808   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:09.291816   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:09.291880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:09.326695   68713 cri.go:89] found id: ""
	I0815 18:41:09.326717   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.326726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:09.326731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:09.326791   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:09.365779   68713 cri.go:89] found id: ""
	I0815 18:41:09.365807   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.365818   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:09.365825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:09.365880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:09.413475   68713 cri.go:89] found id: ""
	I0815 18:41:09.413500   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.413509   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:09.413514   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:09.413578   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:09.449483   68713 cri.go:89] found id: ""
	I0815 18:41:09.449511   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.449521   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:09.449528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:09.449623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:09.487484   68713 cri.go:89] found id: ""
	I0815 18:41:09.487513   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.487525   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:09.487535   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:09.487549   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:09.536746   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:09.536777   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:09.549912   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:09.549944   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:09.619192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:09.619227   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:09.619246   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:09.698370   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:09.698404   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:12.240745   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:12.254814   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:12.254875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:12.291346   68713 cri.go:89] found id: ""
	I0815 18:41:12.291376   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.291387   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:12.291395   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:12.291456   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:12.324832   68713 cri.go:89] found id: ""
	I0815 18:41:12.324867   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.324878   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:12.324886   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:12.324950   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:12.360172   68713 cri.go:89] found id: ""
	I0815 18:41:12.360193   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.360201   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:12.360206   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:12.360251   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:12.394671   68713 cri.go:89] found id: ""
	I0815 18:41:12.394700   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.394710   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:12.394731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:12.394800   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:12.428951   68713 cri.go:89] found id: ""
	I0815 18:41:12.428999   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.429007   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:12.429013   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:12.429057   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:12.466035   68713 cri.go:89] found id: ""
	I0815 18:41:12.466061   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.466069   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:12.466075   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:12.466125   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:12.500003   68713 cri.go:89] found id: ""
	I0815 18:41:12.500031   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.500042   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:12.500050   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:12.500105   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:12.537433   68713 cri.go:89] found id: ""
	I0815 18:41:12.537457   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.537464   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:12.537473   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:12.537484   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:12.586768   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:12.586809   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:12.600549   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:12.600578   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:12.673112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:12.673138   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:12.673154   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:12.754689   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:12.754726   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:11.348767   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.349973   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:12.158249   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.158354   68429 pod_ready.go:82] duration metric: took 4m0.006607137s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:13.158373   68429 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:13.158381   68429 pod_ready.go:39] duration metric: took 4m7.064501997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
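The readiness wait that expires here has a close kubectl equivalent; a minimal sketch for reproducing it by hand (the harness itself polls pod status through the Go client rather than shelling out, and the 4m timeout is taken from the duration logged above):

	kubectl -n kube-system wait --for=condition=Ready \
	  pod/metrics-server-6867b74b74-8mppk --timeout=4m
	# exits non-zero with "timed out waiting for the condition" if the pod never becomes Ready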
	I0815 18:41:13.158395   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:13.158423   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:13.158467   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:13.203746   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.203771   68429 cri.go:89] found id: ""
	I0815 18:41:13.203779   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:13.203840   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.208188   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:13.208248   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:13.245326   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.245351   68429 cri.go:89] found id: ""
	I0815 18:41:13.245359   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:13.245412   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.250212   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:13.250281   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:13.296537   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:13.296565   68429 cri.go:89] found id: ""
	I0815 18:41:13.296576   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:13.296634   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.300823   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:13.300881   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:13.337973   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.338018   68429 cri.go:89] found id: ""
	I0815 18:41:13.338031   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:13.338083   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.342251   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:13.342307   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:13.379921   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.379948   68429 cri.go:89] found id: ""
	I0815 18:41:13.379957   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:13.380005   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.384451   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:13.384539   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:13.421077   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:13.421113   68429 cri.go:89] found id: ""
	I0815 18:41:13.421122   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:13.421180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.425566   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:13.425640   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:13.468663   68429 cri.go:89] found id: ""
	I0815 18:41:13.468688   68429 logs.go:276] 0 containers: []
	W0815 18:41:13.468696   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:13.468701   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:13.468753   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:13.506689   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:13.506711   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:13.506715   68429 cri.go:89] found id: ""
	I0815 18:41:13.506723   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:13.506784   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.511177   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.515519   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:13.515543   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:13.583771   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:13.583806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:13.714906   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:13.714945   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.766512   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:13.766548   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.818416   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:13.818450   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.859035   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:13.859073   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.901515   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:13.901546   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:14.437262   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:14.437304   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:14.453511   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:14.453551   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:14.489238   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:14.489267   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:14.540141   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:14.540184   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:14.574758   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:14.574785   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:14.609370   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:14.609398   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:15.294667   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:15.307758   68713 kubeadm.go:597] duration metric: took 4m2.67500099s to restartPrimaryControlPlane
	W0815 18:41:15.307840   68713 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:41:15.307872   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:41:15.761255   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:15.776049   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:15.786643   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:15.796517   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:15.796537   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:15.796585   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:15.806118   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:15.806167   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:15.816363   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:15.826396   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:15.826449   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:15.836538   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.847035   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:15.847093   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.857475   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:15.867084   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:15.867144   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:15.879736   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:15.954497   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:41:15.954588   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:16.098128   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:16.098244   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:16.098345   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:41:16.288507   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:16.290439   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:16.290555   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:16.290656   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:16.290756   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:16.290831   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:16.290923   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:16.291003   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:16.291096   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:16.291182   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:16.291280   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:16.291396   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:16.291457   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:16.291509   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:16.363570   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:16.549782   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:16.789250   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:16.983388   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:17.004293   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:17.006438   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:17.006485   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:17.154583   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:17.156594   68713 out.go:235]   - Booting up control plane ...
	I0815 18:41:17.156717   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:17.177351   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:17.179286   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:17.180313   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:17.183829   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
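The control-plane rebuild for this node reduces to the kubeadm invocation logged at 18:41:15 above, followed by waiting for the apiserver to answer on port 8443; a minimal sketch of doing the same by hand (the command and flag list are copied from the log, while the curl polling loop and its interval are assumptions, not part of the harness):

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem
	# wait for the apiserver to start answering (polling loop is an assumption)
	until curl -ksf https://localhost:8443/healthz >/dev/null; do sleep 5; done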
	I0815 18:41:15.850424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.348986   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.430273   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.281857018s)
	I0815 18:41:18.430359   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:18.445633   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:18.457459   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:18.469748   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:18.469769   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:18.469818   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:18.480099   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:18.480146   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:18.491871   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:18.501274   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:18.501339   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:18.510186   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.518803   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:18.518863   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.527843   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:18.536437   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:18.536514   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:18.545573   68248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:18.596478   68248 kubeadm.go:310] W0815 18:41:18.577134    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.597311   68248 kubeadm.go:310] W0815 18:41:18.577958    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.709937   68248 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:41:17.151343   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:17.173653   68429 api_server.go:72] duration metric: took 4m18.293407117s to wait for apiserver process to appear ...
	I0815 18:41:17.173677   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:17.173724   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:17.173784   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:17.211484   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.211509   68429 cri.go:89] found id: ""
	I0815 18:41:17.211518   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:17.211583   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.216011   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:17.216107   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:17.265454   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.265486   68429 cri.go:89] found id: ""
	I0815 18:41:17.265497   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:17.265554   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.269804   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:17.269868   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:17.310339   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.310363   68429 cri.go:89] found id: ""
	I0815 18:41:17.310371   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:17.310435   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.315639   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:17.315695   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:17.352364   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.352387   68429 cri.go:89] found id: ""
	I0815 18:41:17.352396   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:17.352452   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.356782   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:17.356848   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:17.396704   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.396733   68429 cri.go:89] found id: ""
	I0815 18:41:17.396744   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:17.396799   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.400920   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:17.400985   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:17.440361   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.440390   68429 cri.go:89] found id: ""
	I0815 18:41:17.440400   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:17.440464   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.445057   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:17.445127   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:17.487341   68429 cri.go:89] found id: ""
	I0815 18:41:17.487369   68429 logs.go:276] 0 containers: []
	W0815 18:41:17.487380   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:17.487388   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:17.487446   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:17.528197   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.528218   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.528223   68429 cri.go:89] found id: ""
	I0815 18:41:17.528229   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:17.528285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.532536   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.536745   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:17.536768   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.574236   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:17.574268   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:17.617822   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:17.617853   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.673009   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:17.673037   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.717620   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:17.717647   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.764641   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:17.764671   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.815586   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:17.815618   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.855287   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:17.855310   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.906486   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:17.906514   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.941540   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:17.941562   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:18.373461   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:18.373497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:18.454203   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:18.454244   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:18.470284   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:18.470315   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:20.349635   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:22.350034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:21.080947   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:41:21.085334   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:41:21.086420   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:21.086442   68429 api_server.go:131] duration metric: took 3.912756949s to wait for apiserver health ...
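The healthz probe recorded just above can be reproduced against the same endpoint; a minimal sketch (using -k to skip verification of the self-signed apiserver certificate is an assumption about how to reach it from outside the harness):

	curl -k https://192.168.61.7:8444/healthz
	# a healthy apiserver answers with the body:
	ok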
	I0815 18:41:21.086452   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:21.086481   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:21.086537   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:21.124183   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.124210   68429 cri.go:89] found id: ""
	I0815 18:41:21.124218   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:21.124285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.128402   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:21.128472   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:21.164737   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.164768   68429 cri.go:89] found id: ""
	I0815 18:41:21.164779   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:21.164835   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.170622   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:21.170699   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:21.206823   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.206847   68429 cri.go:89] found id: ""
	I0815 18:41:21.206855   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:21.206910   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.211055   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:21.211128   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:21.255529   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.255555   68429 cri.go:89] found id: ""
	I0815 18:41:21.255565   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:21.255629   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.260062   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:21.260139   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:21.298058   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.298116   68429 cri.go:89] found id: ""
	I0815 18:41:21.298124   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:21.298180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.302821   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:21.302892   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:21.340895   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.340925   68429 cri.go:89] found id: ""
	I0815 18:41:21.340936   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:21.341003   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.345545   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:21.345638   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:21.383180   68429 cri.go:89] found id: ""
	I0815 18:41:21.383212   68429 logs.go:276] 0 containers: []
	W0815 18:41:21.383223   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:21.383232   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:21.383301   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:21.421152   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:21.421178   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.421185   68429 cri.go:89] found id: ""
	I0815 18:41:21.421198   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:21.421257   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.426326   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.430307   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:21.430351   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:21.562655   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:21.562697   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.613436   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:21.613470   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.674678   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:21.674721   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.717283   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:21.717316   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.760218   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:21.760249   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.802313   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:21.802352   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.874565   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:21.874608   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:21.891629   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:21.891666   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:21.934128   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:21.934170   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.985467   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:21.985497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:22.023731   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:22.023770   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:22.403584   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:22.403626   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:25.005734   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:41:25.005760   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.005766   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.005770   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.005775   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.005778   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.005781   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.005788   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.005793   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.005799   68429 system_pods.go:74] duration metric: took 3.919341536s to wait for pod list to return data ...
	I0815 18:41:25.005806   68429 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:25.008398   68429 default_sa.go:45] found service account: "default"
	I0815 18:41:25.008419   68429 default_sa.go:55] duration metric: took 2.608281ms for default service account to be created ...
	I0815 18:41:25.008427   68429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:25.012784   68429 system_pods.go:86] 8 kube-system pods found
	I0815 18:41:25.012804   68429 system_pods.go:89] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.012810   68429 system_pods.go:89] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.012817   68429 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.012821   68429 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.012825   68429 system_pods.go:89] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.012828   68429 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.012834   68429 system_pods.go:89] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.012838   68429 system_pods.go:89] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.012850   68429 system_pods.go:126] duration metric: took 4.415694ms to wait for k8s-apps to be running ...
	I0815 18:41:25.012858   68429 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:25.012905   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:25.028245   68429 system_svc.go:56] duration metric: took 15.378403ms WaitForService to wait for kubelet
	I0815 18:41:25.028272   68429 kubeadm.go:582] duration metric: took 4m26.148030358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:25.028290   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:25.030696   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:25.030717   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:25.030728   68429 node_conditions.go:105] duration metric: took 2.43352ms to run NodePressure ...
	I0815 18:41:25.030742   68429 start.go:241] waiting for startup goroutines ...
	I0815 18:41:25.030751   68429 start.go:246] waiting for cluster config update ...
	I0815 18:41:25.030768   68429 start.go:255] writing updated cluster config ...
	I0815 18:41:25.031028   68429 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:25.077910   68429 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:25.079973   68429 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-423062" cluster and "default" namespace by default
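	(At this point the default-k8s-diff-port-423062 profile is up, with only metrics-server still pending per the pod list at 18:41:25 above. As an illustrative check only, and not something the harness runs, the context minikube just wrote could be inspected from the host with standard kubectl commands; the profile name is taken from the log, while the label selector is an assumption about how the metrics-server addon labels its workload:
	    kubectl config use-context default-k8s-diff-port-423062
	    kubectl -n kube-system get pods -l k8s-app=metrics-server
	The log itself only shows the pod name metrics-server-6867b74b74-8mppk, so the selector may need adjusting on a real cluster.)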
	I0815 18:41:27.911884   68248 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 18:41:27.911943   68248 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:27.912011   68248 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:27.912130   68248 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:27.912272   68248 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 18:41:27.912359   68248 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:27.913884   68248 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:27.913990   68248 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:27.914092   68248 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:27.914197   68248 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:27.914289   68248 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:27.914362   68248 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:27.914433   68248 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:27.914521   68248 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:27.914606   68248 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:27.914859   68248 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:27.914984   68248 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:27.915040   68248 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:27.915119   68248 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:27.915190   68248 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:27.915268   68248 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 18:41:27.915336   68248 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:27.915419   68248 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:27.915500   68248 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:27.915606   68248 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:27.915691   68248 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:27.917229   68248 out.go:235]   - Booting up control plane ...
	I0815 18:41:27.917311   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:27.917377   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:27.917433   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:27.917521   68248 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:27.917590   68248 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:27.917623   68248 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:27.917740   68248 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 18:41:27.917829   68248 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 18:41:27.917880   68248 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00200618s
	I0815 18:41:27.917954   68248 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 18:41:27.918011   68248 kubeadm.go:310] [api-check] The API server is healthy after 5.501605719s
	I0815 18:41:27.918122   68248 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 18:41:27.918268   68248 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 18:41:27.918361   68248 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 18:41:27.918626   68248 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-555028 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 18:41:27.918723   68248 kubeadm.go:310] [bootstrap-token] Using token: 99xu37.bm6hiisu91f6rbvd
	I0815 18:41:27.920248   68248 out.go:235]   - Configuring RBAC rules ...
	I0815 18:41:27.920360   68248 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 18:41:27.920467   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 18:41:27.920651   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 18:41:27.920785   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 18:41:27.920938   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 18:41:27.921052   68248 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 18:41:27.921225   68248 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 18:41:27.921286   68248 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 18:41:27.921356   68248 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 18:41:27.921369   68248 kubeadm.go:310] 
	I0815 18:41:27.921422   68248 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 18:41:27.921428   68248 kubeadm.go:310] 
	I0815 18:41:27.921488   68248 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 18:41:27.921497   68248 kubeadm.go:310] 
	I0815 18:41:27.921521   68248 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 18:41:27.921570   68248 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 18:41:27.921619   68248 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 18:41:27.921625   68248 kubeadm.go:310] 
	I0815 18:41:27.921698   68248 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 18:41:27.921711   68248 kubeadm.go:310] 
	I0815 18:41:27.921776   68248 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 18:41:27.921787   68248 kubeadm.go:310] 
	I0815 18:41:27.921858   68248 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 18:41:27.921963   68248 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 18:41:27.922055   68248 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 18:41:27.922064   68248 kubeadm.go:310] 
	I0815 18:41:27.922166   68248 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 18:41:27.922281   68248 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 18:41:27.922306   68248 kubeadm.go:310] 
	I0815 18:41:27.922413   68248 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922550   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 18:41:27.922593   68248 kubeadm.go:310] 	--control-plane 
	I0815 18:41:27.922603   68248 kubeadm.go:310] 
	I0815 18:41:27.922703   68248 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 18:41:27.922712   68248 kubeadm.go:310] 
	I0815 18:41:27.922800   68248 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922901   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 18:41:27.922909   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:41:27.922916   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:41:27.924596   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:41:24.849483   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.350715   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.926142   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:41:27.938307   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:41:27.958862   68248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:41:27.958974   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:27.959032   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-555028 minikube.k8s.io/updated_at=2024_08_15T18_41_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=embed-certs-555028 minikube.k8s.io/primary=true
	I0815 18:41:28.156844   68248 ops.go:34] apiserver oom_adj: -16
	I0815 18:41:28.157122   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:28.657735   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.157713   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.658109   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.157486   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.657573   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.157463   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.658073   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.757929   68248 kubeadm.go:1113] duration metric: took 3.799012728s to wait for elevateKubeSystemPrivileges
	I0815 18:41:31.757969   68248 kubeadm.go:394] duration metric: took 5m0.607372858s to StartCluster
	I0815 18:41:31.757992   68248 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.758070   68248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:41:31.759686   68248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.759915   68248 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:41:31.759982   68248 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:41:31.760072   68248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-555028"
	I0815 18:41:31.760090   68248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-555028"
	I0815 18:41:31.760115   68248 addons.go:69] Setting metrics-server=true in profile "embed-certs-555028"
	I0815 18:41:31.760133   68248 addons.go:234] Setting addon metrics-server=true in "embed-certs-555028"
	W0815 18:41:31.760141   68248 addons.go:243] addon metrics-server should already be in state true
	I0815 18:41:31.760148   68248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-555028"
	I0815 18:41:31.760174   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760110   68248 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-555028"
	W0815 18:41:31.760230   68248 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:41:31.760270   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760108   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:41:31.760603   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760619   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760637   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760642   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760658   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760708   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.761566   68248 out.go:177] * Verifying Kubernetes components...
	I0815 18:41:31.762780   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:41:31.777893   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0815 18:41:31.778444   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.779021   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.779049   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.779496   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.780129   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.780182   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.780954   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0815 18:41:31.781146   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0815 18:41:31.781506   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.781586   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.782056   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782061   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782078   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782079   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782437   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782494   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782685   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.783004   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.783034   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.786246   68248 addons.go:234] Setting addon default-storageclass=true in "embed-certs-555028"
	W0815 18:41:31.786270   68248 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:41:31.786300   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.786682   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.786714   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.800152   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	I0815 18:41:31.800729   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.801272   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.801295   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.801656   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.801835   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.803539   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0815 18:41:31.803751   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.804058   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.804640   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.804660   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.805007   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.805157   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.806098   68248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:41:31.806397   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0815 18:41:31.806814   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.807269   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.807450   68248 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:31.807466   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:41:31.807484   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.807744   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.807757   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.808066   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.808889   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.808923   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.809143   68248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:41:31.810575   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:41:31.810593   68248 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:41:31.810619   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.810648   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811760   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.811761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.811802   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811948   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.812101   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.812243   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.814211   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.814675   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814953   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.815117   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.815271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.815391   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.829657   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0815 18:41:31.830122   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.830710   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.830734   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.831077   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.831291   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.833016   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.833271   68248 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:31.833285   68248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:41:31.833302   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.836248   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836655   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.836682   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836908   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.837097   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.837233   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.837410   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.988466   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:41:32.010147   68248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019505   68248 node_ready.go:49] node "embed-certs-555028" has status "Ready":"True"
	I0815 18:41:32.019529   68248 node_ready.go:38] duration metric: took 9.346825ms for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019541   68248 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:32.032036   68248 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:32.125991   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:32.138532   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:41:32.138554   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:41:32.155222   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:32.196478   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:41:32.196517   68248 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:41:32.270461   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:32.270495   68248 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:41:32.405567   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:33.205712   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.050454437s)
	I0815 18:41:33.205772   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205785   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.205793   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.079759984s)
	I0815 18:41:33.205826   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205838   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206153   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206169   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206184   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206194   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206200   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206205   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206210   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206218   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206202   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206226   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206415   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206421   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206430   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206432   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.245033   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.245061   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.245328   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.245343   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.651886   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246273862s)
	I0815 18:41:33.651945   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.651960   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652264   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652307   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.652326   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.652335   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652618   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652640   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652650   68248 addons.go:475] Verifying addon metrics-server=true in "embed-certs-555028"
	I0815 18:41:33.652697   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.654487   68248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:41:29.848462   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:31.850877   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:33.655868   68248 addons.go:510] duration metric: took 1.89588756s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 18:41:34.044605   68248 pod_ready.go:103] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:34.538170   68248 pod_ready.go:93] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.538199   68248 pod_ready.go:82] duration metric: took 2.506135047s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.538212   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543160   68248 pod_ready.go:93] pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.543182   68248 pod_ready.go:82] duration metric: took 4.962289ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543195   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547126   68248 pod_ready.go:93] pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.547144   68248 pod_ready.go:82] duration metric: took 3.94279ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547152   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:36.553459   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:37.555276   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:37.555299   68248 pod_ready.go:82] duration metric: took 3.008140869s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:37.555307   68248 pod_ready.go:39] duration metric: took 5.535754922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:37.555330   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:37.555378   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:37.575318   68248 api_server.go:72] duration metric: took 5.815371975s to wait for apiserver process to appear ...
	I0815 18:41:37.575344   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:37.575361   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:41:37.580989   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:41:37.582142   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:37.582164   68248 api_server.go:131] duration metric: took 6.812732ms to wait for apiserver health ...
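	(The healthz probe logged above is a plain HTTPS GET against the apiserver. A rough manual equivalent, assuming the default RBAC that exposes /healthz to unauthenticated clients, would be:
	    curl -k https://192.168.50.234:8443/healthz
	    # expected body on success: ok
	This is only a sketch of what api_server.go is doing; the harness issues the request from Go rather than via curl -k.)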
	I0815 18:41:37.582174   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:37.589334   68248 system_pods.go:59] 9 kube-system pods found
	I0815 18:41:37.589366   68248 system_pods.go:61] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.589377   68248 system_pods.go:61] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.589385   68248 system_pods.go:61] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.589390   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.589397   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.589403   68248 system_pods.go:61] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.589410   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.589422   68248 system_pods.go:61] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.589431   68248 system_pods.go:61] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.589439   68248 system_pods.go:74] duration metric: took 7.257758ms to wait for pod list to return data ...
	I0815 18:41:37.589450   68248 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:37.592468   68248 default_sa.go:45] found service account: "default"
	I0815 18:41:37.592500   68248 default_sa.go:55] duration metric: took 3.029278ms for default service account to be created ...
	I0815 18:41:37.592511   68248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:37.597697   68248 system_pods.go:86] 9 kube-system pods found
	I0815 18:41:37.597725   68248 system_pods.go:89] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.597730   68248 system_pods.go:89] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.597736   68248 system_pods.go:89] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.597740   68248 system_pods.go:89] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.597744   68248 system_pods.go:89] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.597747   68248 system_pods.go:89] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.597751   68248 system_pods.go:89] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.597756   68248 system_pods.go:89] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.597763   68248 system_pods.go:89] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.597769   68248 system_pods.go:126] duration metric: took 5.252997ms to wait for k8s-apps to be running ...
	I0815 18:41:37.597779   68248 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:37.597819   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:37.616004   68248 system_svc.go:56] duration metric: took 18.217091ms WaitForService to wait for kubelet
	I0815 18:41:37.616032   68248 kubeadm.go:582] duration metric: took 5.856091444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:37.616049   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:37.619195   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:37.619215   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:37.619223   68248 node_conditions.go:105] duration metric: took 3.169759ms to run NodePressure ...
	I0815 18:41:37.619234   68248 start.go:241] waiting for startup goroutines ...
	I0815 18:41:37.619242   68248 start.go:246] waiting for cluster config update ...
	I0815 18:41:37.619253   68248 start.go:255] writing updated cluster config ...
	I0815 18:41:37.619520   68248 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:37.669469   68248 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:37.671485   68248 out.go:177] * Done! kubectl is now configured to use "embed-certs-555028" cluster and "default" namespace by default
	I0815 18:41:34.350702   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:36.849248   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:39.348684   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:41.349379   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:43.848932   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:46.348801   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:48.349736   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:50.848728   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:52.850583   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.184855   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:41:57.185437   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:41:57.185667   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:54.851200   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.349542   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:42:02.186077   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:02.186272   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:59.349724   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:59.349748   67936 pod_ready.go:82] duration metric: took 4m0.007281981s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:59.349757   67936 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:59.349763   67936 pod_ready.go:39] duration metric: took 4m1.606987494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
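	(The 4m0s wait on metrics-server-6867b74b74-djv7r ending in "context deadline exceeded" matches the still-Pending metrics-server pods seen for the other profiles above. A hypothetical follow-up on a live cluster, not performed by the harness here, would be:
	    kubectl -n kube-system describe pod metrics-server-6867b74b74-djv7r
	    kubectl -n kube-system logs deploy/metrics-server --tail=50
	The pod name comes from the log; deploy/metrics-server is assumed from the ReplicaSet hash in that name, and whether logs are retrievable depends on whether the container ever started.)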
	I0815 18:41:59.349779   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:59.349802   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:59.349844   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:59.395509   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:41:59.395541   67936 cri.go:89] found id: ""
	I0815 18:41:59.395552   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:41:59.395608   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.400063   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:59.400140   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:59.435356   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:41:59.435379   67936 cri.go:89] found id: ""
	I0815 18:41:59.435386   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:41:59.435431   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.440159   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:59.440213   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:59.479810   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.479841   67936 cri.go:89] found id: ""
	I0815 18:41:59.479851   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:41:59.479907   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.484341   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:59.484394   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:59.521077   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.521104   67936 cri.go:89] found id: ""
	I0815 18:41:59.521114   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:41:59.521168   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.525075   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:59.525131   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:59.564058   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:41:59.564084   67936 cri.go:89] found id: ""
	I0815 18:41:59.564093   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:41:59.564150   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.568668   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:59.568734   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:59.604385   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.604406   67936 cri.go:89] found id: ""
	I0815 18:41:59.604416   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:41:59.604473   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.609023   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:59.609095   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:59.646289   67936 cri.go:89] found id: ""
	I0815 18:41:59.646334   67936 logs.go:276] 0 containers: []
	W0815 18:41:59.646346   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:59.646355   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:59.646422   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:59.681861   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.681889   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:41:59.681895   67936 cri.go:89] found id: ""
	I0815 18:41:59.681903   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:41:59.681951   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.686379   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.690328   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:59.690353   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:59.759302   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:41:59.759338   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.798249   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:41:59.798276   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.834097   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:41:59.834129   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.885365   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:41:59.885398   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.923013   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:59.923038   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:59.938162   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:59.938192   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:00.077340   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:00.077377   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:00.122292   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:00.122323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:00.165209   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:00.165235   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:00.201278   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:00.201317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:00.238007   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:00.238042   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:00.709997   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:00.710043   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.252351   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:42:03.268074   67936 api_server.go:72] duration metric: took 4m12.770065297s to wait for apiserver process to appear ...
	I0815 18:42:03.268104   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:42:03.268159   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:03.268227   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:03.305890   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:03.305913   67936 cri.go:89] found id: ""
	I0815 18:42:03.305923   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:03.305981   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.309958   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:03.310019   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:03.344602   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:03.344630   67936 cri.go:89] found id: ""
	I0815 18:42:03.344639   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:03.344696   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.349261   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:03.349317   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:03.383892   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:03.383912   67936 cri.go:89] found id: ""
	I0815 18:42:03.383919   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:03.383968   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.388158   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:03.388219   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:03.423264   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.423293   67936 cri.go:89] found id: ""
	I0815 18:42:03.423303   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:03.423352   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.427436   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:03.427496   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:03.470792   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.470819   67936 cri.go:89] found id: ""
	I0815 18:42:03.470829   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:03.470890   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.475884   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:03.475956   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:03.513081   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.513103   67936 cri.go:89] found id: ""
	I0815 18:42:03.513110   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:03.513161   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.517913   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:03.517985   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:03.556149   67936 cri.go:89] found id: ""
	I0815 18:42:03.556180   67936 logs.go:276] 0 containers: []
	W0815 18:42:03.556191   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:03.556199   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:03.556257   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:03.595987   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:03.596015   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:03.596021   67936 cri.go:89] found id: ""
	I0815 18:42:03.596030   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:03.596112   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.600430   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.604422   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:03.604443   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:03.676629   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:03.676665   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.717487   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:03.717514   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.755606   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:03.755632   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.815152   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:03.815187   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.857853   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:03.857882   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:04.296939   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:04.296983   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:04.312346   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:04.312373   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:04.424132   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:04.424162   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:04.482298   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:04.482326   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:04.526805   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:04.526832   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:04.564842   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:04.564871   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:04.602297   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:04.602323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.137972   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:42:07.143165   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:42:07.144155   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:42:07.144174   67936 api_server.go:131] duration metric: took 3.876063215s to wait for apiserver health ...
	I0815 18:42:07.144182   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:42:07.144201   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:07.144243   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:07.185685   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:07.185709   67936 cri.go:89] found id: ""
	I0815 18:42:07.185717   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:07.185782   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.190086   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:07.190179   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:07.233020   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:07.233044   67936 cri.go:89] found id: ""
	I0815 18:42:07.233053   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:07.233114   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.237639   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:07.237698   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:07.277613   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:07.277642   67936 cri.go:89] found id: ""
	I0815 18:42:07.277652   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:07.277714   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.282273   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:07.282346   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:07.324972   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.325003   67936 cri.go:89] found id: ""
	I0815 18:42:07.325013   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:07.325071   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.329402   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:07.329470   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:07.369812   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.369840   67936 cri.go:89] found id: ""
	I0815 18:42:07.369849   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:07.369902   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.373993   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:07.374055   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:07.412036   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.412062   67936 cri.go:89] found id: ""
	I0815 18:42:07.412072   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:07.412145   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.416191   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:07.416263   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:07.457677   67936 cri.go:89] found id: ""
	I0815 18:42:07.457710   67936 logs.go:276] 0 containers: []
	W0815 18:42:07.457721   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:07.457728   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:07.457792   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:07.498173   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:07.498199   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.498204   67936 cri.go:89] found id: ""
	I0815 18:42:07.498210   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:07.498268   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.502704   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.506501   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:07.506520   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.542685   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:07.542713   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.584070   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:07.584097   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.634780   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:07.634812   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.669410   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:07.669436   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:08.062406   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:08.062454   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:08.077171   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:08.077209   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:08.186125   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:08.186158   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:08.229621   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:08.229655   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:08.266791   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:08.266818   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:08.314172   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:08.314197   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:08.388793   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:08.388837   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:08.438287   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:08.438317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:10.990845   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:42:10.990875   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.990879   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.990883   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.990887   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.990890   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.990894   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.990900   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.990905   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.990913   67936 system_pods.go:74] duration metric: took 3.846725869s to wait for pod list to return data ...
	I0815 18:42:10.990919   67936 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:42:10.993933   67936 default_sa.go:45] found service account: "default"
	I0815 18:42:10.993958   67936 default_sa.go:55] duration metric: took 3.032805ms for default service account to be created ...
	I0815 18:42:10.993968   67936 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:42:10.998531   67936 system_pods.go:86] 8 kube-system pods found
	I0815 18:42:10.998553   67936 system_pods.go:89] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.998558   67936 system_pods.go:89] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.998562   67936 system_pods.go:89] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.998567   67936 system_pods.go:89] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.998570   67936 system_pods.go:89] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.998575   67936 system_pods.go:89] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.998582   67936 system_pods.go:89] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.998586   67936 system_pods.go:89] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.998592   67936 system_pods.go:126] duration metric: took 4.619003ms to wait for k8s-apps to be running ...
	I0815 18:42:10.998598   67936 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:42:10.998638   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:42:11.015236   67936 system_svc.go:56] duration metric: took 16.627802ms WaitForService to wait for kubelet
	I0815 18:42:11.015260   67936 kubeadm.go:582] duration metric: took 4m20.517256799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:42:11.015280   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:42:11.018544   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:42:11.018570   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:42:11.018584   67936 node_conditions.go:105] duration metric: took 3.298753ms to run NodePressure ...
	I0815 18:42:11.018598   67936 start.go:241] waiting for startup goroutines ...
	I0815 18:42:11.018611   67936 start.go:246] waiting for cluster config update ...
	I0815 18:42:11.018626   67936 start.go:255] writing updated cluster config ...
	I0815 18:42:11.018907   67936 ssh_runner.go:195] Run: rm -f paused
	I0815 18:42:11.065371   67936 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:42:11.067513   67936 out.go:177] * Done! kubectl is now configured to use "no-preload-599042" cluster and "default" namespace by default
	I0815 18:42:12.186839   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:12.187041   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:32.187938   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:32.188123   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.189799   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:43:12.190012   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.190023   68713 kubeadm.go:310] 
	I0815 18:43:12.190075   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:43:12.190133   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:43:12.190148   68713 kubeadm.go:310] 
	I0815 18:43:12.190205   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:43:12.190265   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:43:12.190394   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:43:12.190403   68713 kubeadm.go:310] 
	I0815 18:43:12.190523   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:43:12.190571   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:43:12.190627   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:43:12.190636   68713 kubeadm.go:310] 
	I0815 18:43:12.190772   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:43:12.190928   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:43:12.190950   68713 kubeadm.go:310] 
	I0815 18:43:12.191108   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:43:12.191218   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:43:12.191344   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:43:12.191478   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:43:12.191504   68713 kubeadm.go:310] 
	I0815 18:43:12.192283   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:43:12.192421   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:43:12.192523   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0815 18:43:12.192655   68713 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 18:43:12.192699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:43:12.658571   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:43:12.675797   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:43:12.687340   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:43:12.687370   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:43:12.687422   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:43:12.698401   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:43:12.698464   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:43:12.709632   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:43:12.720330   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:43:12.720386   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:43:12.731593   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.742122   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:43:12.742185   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.753042   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:43:12.762799   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:43:12.762855   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:43:12.772788   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:43:12.987927   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:45:08.956975   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:45:08.957069   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:45:08.958834   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:45:08.958904   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:45:08.958993   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:45:08.959133   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:45:08.959280   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:45:08.959376   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:45:08.961205   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:45:08.961294   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:45:08.961352   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:45:08.961424   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:45:08.961475   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:45:08.961536   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:45:08.961581   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:45:08.961637   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:45:08.961689   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:45:08.961795   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:45:08.961910   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:45:08.961971   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:45:08.962030   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:45:08.962078   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:45:08.962127   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:45:08.962214   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:45:08.962316   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:45:08.962448   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:45:08.962565   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:45:08.962626   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:45:08.962724   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:45:08.964403   68713 out.go:235]   - Booting up control plane ...
	I0815 18:45:08.964526   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:45:08.964631   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:45:08.964736   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:45:08.964866   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:45:08.965043   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:45:08.965121   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:45:08.965225   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965418   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965508   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965703   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965766   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965919   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965981   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966140   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966200   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966381   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966389   68713 kubeadm.go:310] 
	I0815 18:45:08.966438   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:45:08.966473   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:45:08.966481   68713 kubeadm.go:310] 
	I0815 18:45:08.966533   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:45:08.966580   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:45:08.966711   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:45:08.966718   68713 kubeadm.go:310] 
	I0815 18:45:08.966844   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:45:08.966900   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:45:08.966948   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:45:08.966958   68713 kubeadm.go:310] 
	I0815 18:45:08.967082   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:45:08.967201   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:45:08.967214   68713 kubeadm.go:310] 
	I0815 18:45:08.967341   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:45:08.967450   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:45:08.967548   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:45:08.967646   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:45:08.967678   68713 kubeadm.go:310] 
	I0815 18:45:08.967716   68713 kubeadm.go:394] duration metric: took 7m56.388213745s to StartCluster
	I0815 18:45:08.967768   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:45:08.967834   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:45:09.013913   68713 cri.go:89] found id: ""
	I0815 18:45:09.013943   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.013954   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:45:09.013961   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:45:09.014030   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:45:09.051370   68713 cri.go:89] found id: ""
	I0815 18:45:09.051395   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.051403   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:45:09.051409   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:45:09.051477   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:45:09.086615   68713 cri.go:89] found id: ""
	I0815 18:45:09.086646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.086653   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:45:09.086659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:45:09.086708   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:45:09.122335   68713 cri.go:89] found id: ""
	I0815 18:45:09.122370   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.122381   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:45:09.122389   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:45:09.122453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:45:09.163207   68713 cri.go:89] found id: ""
	I0815 18:45:09.163232   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.163241   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:45:09.163247   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:45:09.163308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:45:09.199396   68713 cri.go:89] found id: ""
	I0815 18:45:09.199426   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.199437   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:45:09.199444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:45:09.199504   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:45:09.235073   68713 cri.go:89] found id: ""
	I0815 18:45:09.235101   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.235112   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:45:09.235120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:45:09.235180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:45:09.271614   68713 cri.go:89] found id: ""
	I0815 18:45:09.271646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.271659   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:45:09.271671   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:45:09.271686   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:45:09.372192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:45:09.372214   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:45:09.372231   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:45:09.496743   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:45:09.496780   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:45:09.540434   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:45:09.540471   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:45:09.595546   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:45:09.595584   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 18:45:09.609831   68713 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:45:09.609885   68713 out.go:270] * 
	W0815 18:45:09.609942   68713 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.609956   68713 out.go:270] * 
	W0815 18:45:09.610794   68713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:45:09.614213   68713 out.go:201] 
	W0815 18:45:09.615379   68713 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.615420   68713 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:45:09.615437   68713 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:45:09.616840   68713 out.go:201] 
	
	
	==> CRI-O <==
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.468990954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747511468959828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b5519eb-c8c7-40b6-b7ec-904acffdccb6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.469676140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c891a8f-13c4-422b-9a7a-780a14c65b04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.469723711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c891a8f-13c4-422b-9a7a-780a14c65b04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.469762627Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3c891a8f-13c4-422b-9a7a-780a14c65b04 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.502744128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b104a2be-6699-43de-99dd-b8a20cf52bd9 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.502820629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b104a2be-6699-43de-99dd-b8a20cf52bd9 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.504252016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ad44b48a-fcca-4a08-85d7-10e49fe72e2c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.504698295Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747511504677172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ad44b48a-fcca-4a08-85d7-10e49fe72e2c name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.505155053Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7100998b-4714-4cbf-a454-f72adc33b666 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.505210697Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7100998b-4714-4cbf-a454-f72adc33b666 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.505243981Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7100998b-4714-4cbf-a454-f72adc33b666 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.536156132Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06b615df-22da-4c4c-b583-65764a468da8 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.536228983Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06b615df-22da-4c4c-b583-65764a468da8 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.537326440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6457fc4-9a5a-4e2b-9d17-896c77120967 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.537755356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747511537727403,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6457fc4-9a5a-4e2b-9d17-896c77120967 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.538279105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4688216e-b40e-479e-9862-0308edb33764 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.538332999Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4688216e-b40e-479e-9862-0308edb33764 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.538365127Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4688216e-b40e-479e-9862-0308edb33764 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.569226547Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50480dc4-1193-42fd-afea-91d84a2c44bd name=/runtime.v1.RuntimeService/Version
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.569299310Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50480dc4-1193-42fd-afea-91d84a2c44bd name=/runtime.v1.RuntimeService/Version
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.570311024Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d9f62882-46fb-4558-9667-261c442af0bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.570744077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747511570722264,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d9f62882-46fb-4558-9667-261c442af0bd name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.571332889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4edf4693-2bf7-4e7c-91f9-9876235bd3a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.571375351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4edf4693-2bf7-4e7c-91f9-9876235bd3a7 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:45:11 old-k8s-version-278865 crio[649]: time="2024-08-15 18:45:11.571406438Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4edf4693-2bf7-4e7c-91f9-9876235bd3a7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug15 18:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055068] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040001] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.968285] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.579604] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625301] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 18:37] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.058621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064012] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.191090] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.131642] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.264819] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.501610] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.065792] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.624202] systemd-fstab-generator[1024]: Ignoring "noauto" option for root device
	[ +13.041505] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 18:41] systemd-fstab-generator[5085]: Ignoring "noauto" option for root device
	[Aug15 18:43] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[  +0.068065] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:45:11 up 8 min,  0 users,  load average: 0.04, 0.13, 0.08
	Linux old-k8s-version-278865 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000146620, 0xc00009c0c0)
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: goroutine 149 [runnable]:
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000a8a0f0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001a9500, 0x0, 0x0)
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000a8e000)
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: E0815 18:45:08.657731    5551 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.89:8443: connect: connection refused
	Aug 15 18:45:08 old-k8s-version-278865 kubelet[5551]: E0815 18:45:08.658299    5551 reflector.go:138] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dold-k8s-version-278865&limit=500&resourceVersion=0": dial tcp 192.168.39.89:8443: connect: connection refused
	Aug 15 18:45:08 old-k8s-version-278865 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 15 18:45:08 old-k8s-version-278865 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 15 18:45:09 old-k8s-version-278865 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Aug 15 18:45:09 old-k8s-version-278865 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 15 18:45:09 old-k8s-version-278865 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 15 18:45:09 old-k8s-version-278865 kubelet[5601]: I0815 18:45:09.401958    5601 server.go:416] Version: v1.20.0
	Aug 15 18:45:09 old-k8s-version-278865 kubelet[5601]: I0815 18:45:09.402352    5601 server.go:837] Client rotation is on, will bootstrap in background
	Aug 15 18:45:09 old-k8s-version-278865 kubelet[5601]: I0815 18:45:09.405373    5601 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 15 18:45:09 old-k8s-version-278865 kubelet[5601]: I0815 18:45:09.406740    5601 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 15 18:45:09 old-k8s-version-278865 kubelet[5601]: W0815 18:45:09.406816    5601 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 2 (224.082454ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-278865" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (740.39s)
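The K8S_KUBELET_NOT_RUNNING output above ends with a concrete suggestion from minikube (pass --extra-config=kubelet.cgroup-driver=systemd). A minimal sketch of retrying the same profile with that suggestion applied, assuming the profile name, driver, runtime and Kubernetes version recorded in the logs; it is not verified here that this clears the kubelet failure on v1.20.0:

	out/minikube-linux-amd64 start -p old-k8s-version-278865 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd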

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-15 18:50:25.601466739 +0000 UTC m=+6316.579571909
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
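The 9m0s wait above corresponds roughly to the following manual check, assuming the kubeconfig context matches the profile name used by the test (timeout shown only for illustration):

	kubectl --context default-k8s-diff-port-423062 --namespace=kubernetes-dashboard wait --for=condition=ready pod --selector=k8s-app=kubernetes-dashboard --timeout=9m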
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-423062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-423062 logs -n 25: (2.117102423s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-498665                              | stopped-upgrade-498665       | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-698209 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | disable-driver-mounts-698209                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:29 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-599042             | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-555028            | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-423062  | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-278865        | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-599042                  | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC | 15 Aug 24 18:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-555028                 | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-423062       | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-278865             | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:32:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:32:52.788403   68713 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:32:52.788704   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788715   68713 out.go:358] Setting ErrFile to fd 2...
	I0815 18:32:52.788719   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788916   68713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:32:52.789431   68713 out.go:352] Setting JSON to false
	I0815 18:32:52.790297   68713 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8119,"bootTime":1723738654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:32:52.790355   68713 start.go:139] virtualization: kvm guest
	I0815 18:32:52.792478   68713 out.go:177] * [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:32:52.793818   68713 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:32:52.793864   68713 notify.go:220] Checking for updates...
	I0815 18:32:52.796618   68713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:32:52.797914   68713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:32:52.799054   68713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:32:52.800337   68713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:32:52.801448   68713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:32:52.803087   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:32:52.803465   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.803521   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.819013   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0815 18:32:52.819447   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.819966   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.819985   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.820284   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.820482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.822582   68713 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 18:32:52.824024   68713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:32:52.824380   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.824425   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.839486   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0815 18:32:52.839905   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.840345   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.840367   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.840730   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.840904   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.876811   68713 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:32:52.878075   68713 start.go:297] selected driver: kvm2
	I0815 18:32:52.878098   68713 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.878240   68713 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:32:52.878920   68713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.879001   68713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:32:52.894158   68713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:32:52.894895   68713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:32:52.894953   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:32:52.894969   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:32:52.895020   68713 start.go:340] cluster config:
	{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.895203   68713 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.897304   68713 out.go:177] * Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	I0815 18:32:51.348753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:32:52.898737   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:32:52.898785   68713 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:32:52.898795   68713 cache.go:56] Caching tarball of preloaded images
	I0815 18:32:52.898861   68713 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:32:52.898871   68713 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:32:52.898962   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:32:52.899159   68713 start.go:360] acquireMachinesLock for old-k8s-version-278865: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:32:57.424754   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:00.496786   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:06.576768   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:09.648759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:15.728760   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:18.800783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:24.880725   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:27.952781   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:34.032763   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:37.104737   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:43.184796   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:46.260701   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:52.336771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:55.408745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:01.488742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:04.560759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:10.640771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:13.712753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:19.792795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:22.864720   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:28.944769   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:32.016745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:38.096783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:41.168739   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:47.248802   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:50.320778   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:56.400717   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:59.472780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:05.552762   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:08.624707   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:14.704753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:17.776748   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:23.856782   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:26.932742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:33.008795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:36.080807   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:42.160767   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:45.232800   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:51.312780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:54.384719   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:00.464740   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:03.536736   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:06.540805   68248 start.go:364] duration metric: took 4m1.610543673s to acquireMachinesLock for "embed-certs-555028"
	I0815 18:36:06.540869   68248 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:06.540881   68248 fix.go:54] fixHost starting: 
	I0815 18:36:06.541241   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:06.541272   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:06.556680   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0815 18:36:06.557105   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:06.557518   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:36:06.557540   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:06.557831   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:06.558059   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:06.558202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:36:06.559702   68248 fix.go:112] recreateIfNeeded on embed-certs-555028: state=Stopped err=<nil>
	I0815 18:36:06.559724   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	W0815 18:36:06.559877   68248 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:06.561410   68248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-555028" ...
	I0815 18:36:06.538474   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:06.538508   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.538805   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:36:06.538831   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.539016   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:36:06.540664   67936 machine.go:96] duration metric: took 4m37.431349663s to provisionDockerMachine
	I0815 18:36:06.540702   67936 fix.go:56] duration metric: took 4m37.452150687s for fixHost
	I0815 18:36:06.540709   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 4m37.452172562s
	W0815 18:36:06.540732   67936 start.go:714] error starting host: provision: host is not running
	W0815 18:36:06.540801   67936 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0815 18:36:06.540809   67936 start.go:729] Will try again in 5 seconds ...
	I0815 18:36:06.562384   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Start
	I0815 18:36:06.562537   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring networks are active...
	I0815 18:36:06.563252   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network default is active
	I0815 18:36:06.563554   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network mk-embed-certs-555028 is active
	I0815 18:36:06.563908   68248 main.go:141] libmachine: (embed-certs-555028) Getting domain xml...
	I0815 18:36:06.564614   68248 main.go:141] libmachine: (embed-certs-555028) Creating domain...
	I0815 18:36:07.763793   68248 main.go:141] libmachine: (embed-certs-555028) Waiting to get IP...
	I0815 18:36:07.764733   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.765099   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.765200   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.765085   69393 retry.go:31] will retry after 206.840107ms: waiting for machine to come up
	I0815 18:36:07.973596   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.974069   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.974093   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.974019   69393 retry.go:31] will retry after 319.002956ms: waiting for machine to come up
	I0815 18:36:08.294670   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.295125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.295154   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.295073   69393 retry.go:31] will retry after 425.99373ms: waiting for machine to come up
	I0815 18:36:08.722549   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.722954   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.722985   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.722903   69393 retry.go:31] will retry after 428.077891ms: waiting for machine to come up
	I0815 18:36:09.152674   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.153155   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.153187   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.153108   69393 retry.go:31] will retry after 476.041155ms: waiting for machine to come up
	I0815 18:36:09.630963   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.631456   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.631485   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.631395   69393 retry.go:31] will retry after 751.179582ms: waiting for machine to come up
	I0815 18:36:11.542364   67936 start.go:360] acquireMachinesLock for no-preload-599042: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:36:10.384466   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:10.384888   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:10.384916   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:10.384842   69393 retry.go:31] will retry after 1.028202731s: waiting for machine to come up
	I0815 18:36:11.414905   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:11.415343   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:11.415373   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:11.415283   69393 retry.go:31] will retry after 1.129105535s: waiting for machine to come up
	I0815 18:36:12.545941   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:12.546365   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:12.546387   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:12.546320   69393 retry.go:31] will retry after 1.734323812s: waiting for machine to come up
	I0815 18:36:14.283247   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:14.283622   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:14.283653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:14.283569   69393 retry.go:31] will retry after 1.657173562s: waiting for machine to come up
	I0815 18:36:15.943329   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:15.943730   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:15.943760   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:15.943669   69393 retry.go:31] will retry after 2.349664822s: waiting for machine to come up
	I0815 18:36:18.295797   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:18.296330   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:18.296363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:18.296264   69393 retry.go:31] will retry after 2.889119284s: waiting for machine to come up
	I0815 18:36:21.186597   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:21.186983   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:21.187004   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:21.186945   69393 retry.go:31] will retry after 2.79101595s: waiting for machine to come up
	I0815 18:36:23.981271   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981732   68248 main.go:141] libmachine: (embed-certs-555028) Found IP for machine: 192.168.50.234
	I0815 18:36:23.981761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has current primary IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981770   68248 main.go:141] libmachine: (embed-certs-555028) Reserving static IP address...
	I0815 18:36:23.982166   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.982189   68248 main.go:141] libmachine: (embed-certs-555028) DBG | skip adding static IP to network mk-embed-certs-555028 - found existing host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"}
	I0815 18:36:23.982200   68248 main.go:141] libmachine: (embed-certs-555028) Reserved static IP address: 192.168.50.234
	I0815 18:36:23.982210   68248 main.go:141] libmachine: (embed-certs-555028) Waiting for SSH to be available...
	I0815 18:36:23.982220   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Getting to WaitForSSH function...
	I0815 18:36:23.984253   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984578   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.984601   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984696   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH client type: external
	I0815 18:36:23.984720   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa (-rw-------)
	I0815 18:36:23.984752   68248 main.go:141] libmachine: (embed-certs-555028) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:23.984763   68248 main.go:141] libmachine: (embed-certs-555028) DBG | About to run SSH command:
	I0815 18:36:23.984772   68248 main.go:141] libmachine: (embed-certs-555028) DBG | exit 0
	I0815 18:36:24.104618   68248 main.go:141] libmachine: (embed-certs-555028) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:24.105023   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetConfigRaw
	I0815 18:36:24.105694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.108191   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108532   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.108568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108844   68248 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/config.json ...
	I0815 18:36:24.109037   68248 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:24.109055   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.109249   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.111363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111680   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.111725   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111821   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.111989   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112141   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112277   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.112454   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.112662   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.112673   68248 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:24.208951   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:24.208986   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209255   68248 buildroot.go:166] provisioning hostname "embed-certs-555028"
	I0815 18:36:24.209285   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209514   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.212393   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.212850   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.212878   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.213010   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.213198   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213340   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213466   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.213663   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.213821   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.213832   68248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-555028 && echo "embed-certs-555028" | sudo tee /etc/hostname
	I0815 18:36:24.327157   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-555028
	
	I0815 18:36:24.327191   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.330193   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330577   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.330607   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330824   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.331029   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331174   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331325   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.331508   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.331713   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.331732   68248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-555028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-555028/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-555028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:24.437909   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:24.437938   68248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:24.437977   68248 buildroot.go:174] setting up certificates
	I0815 18:36:24.437987   68248 provision.go:84] configureAuth start
	I0815 18:36:24.437996   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.438264   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.440637   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.440961   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.440993   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.441089   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.443071   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443415   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.443448   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443562   68248 provision.go:143] copyHostCerts
	I0815 18:36:24.443622   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:24.443643   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:24.443726   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:24.443843   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:24.443855   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:24.443893   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:24.443968   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:24.443977   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:24.444007   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:24.444074   68248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.embed-certs-555028 san=[127.0.0.1 192.168.50.234 embed-certs-555028 localhost minikube]
	I0815 18:36:24.507119   68248 provision.go:177] copyRemoteCerts
	I0815 18:36:24.507177   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:24.507202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.509835   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510230   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.510260   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510403   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.510606   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.510735   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.510842   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:24.590623   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:24.615635   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:36:24.643400   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:36:24.670394   68248 provision.go:87] duration metric: took 232.396705ms to configureAuth
	I0815 18:36:24.670415   68248 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:24.670609   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:24.670694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.673303   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673685   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.673721   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673863   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.674050   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674222   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674354   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.674513   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.674673   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.674688   68248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:25.149223   68429 start.go:364] duration metric: took 3m59.233021018s to acquireMachinesLock for "default-k8s-diff-port-423062"
	I0815 18:36:25.149295   68429 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:25.149306   68429 fix.go:54] fixHost starting: 
	I0815 18:36:25.149757   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:25.149799   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:25.166940   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I0815 18:36:25.167342   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:25.167882   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:25.167910   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:25.168179   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:25.168383   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:25.168553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:25.170072   68429 fix.go:112] recreateIfNeeded on default-k8s-diff-port-423062: state=Stopped err=<nil>
	I0815 18:36:25.170106   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	W0815 18:36:25.170263   68429 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:25.172091   68429 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-423062" ...
	I0815 18:36:25.173641   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Start
	I0815 18:36:25.173831   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring networks are active...
	I0815 18:36:25.174594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network default is active
	I0815 18:36:25.174981   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network mk-default-k8s-diff-port-423062 is active
	I0815 18:36:25.175410   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Getting domain xml...
	I0815 18:36:25.176275   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Creating domain...
	I0815 18:36:24.928110   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:24.928140   68248 machine.go:96] duration metric: took 819.089931ms to provisionDockerMachine
	I0815 18:36:24.928156   68248 start.go:293] postStartSetup for "embed-certs-555028" (driver="kvm2")
	I0815 18:36:24.928170   68248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:24.928190   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.928513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:24.928542   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.931301   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931756   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.931799   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931846   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.932028   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.932311   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.932477   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.011373   68248 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:25.015677   68248 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:25.015707   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:25.015798   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:25.015900   68248 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:25.016014   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:25.025465   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:25.049662   68248 start.go:296] duration metric: took 121.491742ms for postStartSetup
	I0815 18:36:25.049704   68248 fix.go:56] duration metric: took 18.508823511s for fixHost
	I0815 18:36:25.049728   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.052184   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052538   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.052583   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052718   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.052904   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053099   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.053438   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:25.053604   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:25.053614   68248 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:25.149075   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746985.122186042
	
	I0815 18:36:25.149095   68248 fix.go:216] guest clock: 1723746985.122186042
	I0815 18:36:25.149103   68248 fix.go:229] Guest: 2024-08-15 18:36:25.122186042 +0000 UTC Remote: 2024-08-15 18:36:25.049708543 +0000 UTC m=+260.258232753 (delta=72.477499ms)
	I0815 18:36:25.149131   68248 fix.go:200] guest clock delta is within tolerance: 72.477499ms
	I0815 18:36:25.149135   68248 start.go:83] releasing machines lock for "embed-certs-555028", held for 18.608287436s
	I0815 18:36:25.149158   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.149408   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:25.152125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152542   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.152568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152742   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153260   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153439   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153539   68248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:25.153587   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.153639   68248 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:25.153659   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.156311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156504   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156740   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156769   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156847   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156883   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.157040   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157122   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157303   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157318   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157473   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157479   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.233725   68248 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:25.253737   68248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:25.402047   68248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:25.409250   68248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:25.409328   68248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:25.426491   68248 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:25.426514   68248 start.go:495] detecting cgroup driver to use...
	I0815 18:36:25.426580   68248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:25.445177   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:25.459432   68248 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:25.459512   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:25.473777   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:25.488144   68248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:25.627700   68248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:25.791278   68248 docker.go:233] disabling docker service ...
	I0815 18:36:25.791349   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:25.810146   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:25.825131   68248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:25.975457   68248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:26.106757   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:26.123053   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:26.142739   68248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:26.142804   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.153163   68248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:26.153217   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.163863   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.175028   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.192480   68248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:26.208933   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.219825   68248 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.245623   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.256645   68248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:26.265947   68248 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:26.266004   68248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:26.278665   68248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:26.289519   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:26.423656   68248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:26.560919   68248 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:26.560996   68248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:26.565696   68248 start.go:563] Will wait 60s for crictl version
	I0815 18:36:26.565764   68248 ssh_runner.go:195] Run: which crictl
	I0815 18:36:26.569498   68248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:26.609872   68248 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:26.609948   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.645300   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.681229   68248 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:26.682461   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:26.685495   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686011   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:26.686037   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686323   68248 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:26.690590   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:26.703512   68248 kubeadm.go:883] updating cluster {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:26.703679   68248 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:26.703748   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:26.740601   68248 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:26.740679   68248 ssh_runner.go:195] Run: which lz4
	I0815 18:36:26.744798   68248 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:26.748894   68248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:26.748921   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:28.188174   68248 crio.go:462] duration metric: took 1.443420751s to copy over tarball
	I0815 18:36:28.188254   68248 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:26.428013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting to get IP...
	I0815 18:36:26.428929   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429397   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.429391   69513 retry.go:31] will retry after 296.45967ms: waiting for machine to come up
	I0815 18:36:26.727871   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728273   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728298   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.728237   69513 retry.go:31] will retry after 258.379179ms: waiting for machine to come up
	I0815 18:36:26.988915   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.989374   69513 retry.go:31] will retry after 418.611169ms: waiting for machine to come up
	I0815 18:36:27.409905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410358   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.410327   69513 retry.go:31] will retry after 566.642237ms: waiting for machine to come up
	I0815 18:36:27.978717   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979183   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.979125   69513 retry.go:31] will retry after 740.292473ms: waiting for machine to come up
	I0815 18:36:28.720587   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.720970   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.721008   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:28.720941   69513 retry.go:31] will retry after 610.435484ms: waiting for machine to come up
	I0815 18:36:29.333342   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333696   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333731   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:29.333632   69513 retry.go:31] will retry after 1.059086771s: waiting for machine to come up
	I0815 18:36:30.394125   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394560   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394589   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:30.394519   69513 retry.go:31] will retry after 1.279753887s: waiting for machine to come up
	I0815 18:36:30.309340   68248 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.121056035s)
	I0815 18:36:30.309382   68248 crio.go:469] duration metric: took 2.121176349s to extract the tarball
	I0815 18:36:30.309394   68248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:30.346520   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:30.394771   68248 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:30.394789   68248 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:30.394799   68248 kubeadm.go:934] updating node { 192.168.50.234 8443 v1.31.0 crio true true} ...
	I0815 18:36:30.394951   68248 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-555028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:30.395033   68248 ssh_runner.go:195] Run: crio config
	I0815 18:36:30.439636   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:30.439663   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:30.439678   68248 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:30.439707   68248 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.234 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-555028 NodeName:embed-certs-555028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:30.439899   68248 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-555028"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:30.439976   68248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:30.449774   68248 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:30.449842   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:30.458892   68248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 18:36:30.475171   68248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:30.490942   68248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 18:36:30.507498   68248 ssh_runner.go:195] Run: grep 192.168.50.234	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:30.511254   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:30.522772   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:30.646060   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:30.667948   68248 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028 for IP: 192.168.50.234
	I0815 18:36:30.667974   68248 certs.go:194] generating shared ca certs ...
	I0815 18:36:30.667994   68248 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:30.668178   68248 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:30.668231   68248 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:30.668244   68248 certs.go:256] generating profile certs ...
	I0815 18:36:30.668360   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/client.key
	I0815 18:36:30.668442   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key.539203f3
	I0815 18:36:30.668524   68248 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key
	I0815 18:36:30.668686   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:30.668725   68248 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:30.668737   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:30.668774   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:30.668807   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:30.668836   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:30.668941   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:30.669810   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:30.721245   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:30.753016   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:30.782005   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:30.815008   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 18:36:30.847615   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:30.871566   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:30.894778   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:30.919167   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:30.942597   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:30.965395   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:30.988959   68248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:31.005578   68248 ssh_runner.go:195] Run: openssl version
	I0815 18:36:31.011697   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:31.022496   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027102   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027154   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.033475   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:31.044793   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:31.055793   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060642   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060692   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.066544   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:31.077637   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:31.088468   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093295   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093347   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.098908   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:31.109856   68248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:31.114519   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:31.120709   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:31.126754   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:31.132917   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:31.138739   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:31.144785   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:31.150604   68248 kubeadm.go:392] StartCluster: {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:31.150702   68248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:31.150755   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.192152   68248 cri.go:89] found id: ""
	I0815 18:36:31.192253   68248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:31.203076   68248 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:31.203099   68248 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:31.203151   68248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:31.213659   68248 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:31.215070   68248 kubeconfig.go:125] found "embed-certs-555028" server: "https://192.168.50.234:8443"
	I0815 18:36:31.218243   68248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:31.228210   68248 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.234
	I0815 18:36:31.228245   68248 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:31.228267   68248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:31.228317   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.275944   68248 cri.go:89] found id: ""
	I0815 18:36:31.276031   68248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:31.294466   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:31.307241   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:31.307276   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:31.307327   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:36:31.316654   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:31.316722   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:31.326475   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:36:31.335726   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:31.335796   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:31.345063   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.353576   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:31.353628   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.362449   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:36:31.370717   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:31.370792   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:31.379827   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:31.389001   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:31.510611   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.158537   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.647891555s)
	I0815 18:36:33.158574   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.376600   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.459742   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.545503   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:33.545595   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.046191   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.546256   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.571236   68248 api_server.go:72] duration metric: took 1.025744612s to wait for apiserver process to appear ...
	I0815 18:36:34.571275   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:34.571297   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:31.675513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676042   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:31.675960   69513 retry.go:31] will retry after 1.669099573s: waiting for machine to come up
	I0815 18:36:33.348089   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348611   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348639   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:33.348575   69513 retry.go:31] will retry after 1.613394267s: waiting for machine to come up
	I0815 18:36:34.963674   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964187   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:34.964146   69513 retry.go:31] will retry after 2.128578928s: waiting for machine to come up
	I0815 18:36:37.262138   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.262168   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.262184   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.310539   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.310569   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.571713   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.590002   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:37.590062   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.071526   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.076179   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:38.076212   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.571714   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.576518   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:36:38.582358   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:38.582381   68248 api_server.go:131] duration metric: took 4.011097638s to wait for apiserver health ...
	I0815 18:36:38.582393   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:38.582401   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:38.584203   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:38.585513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:38.604350   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:38.645538   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:38.653445   68248 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:38.653476   68248 system_pods.go:61] "coredns-6f6b679f8f-sjx7c" [93a037b9-1e7c-471a-b62d-d7898b2b8287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:38.653486   68248 system_pods.go:61] "etcd-embed-certs-555028" [7e526b10-7acd-4d25-9847-8e11e21ba8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:38.653495   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [3f317b0f-15a1-4e7d-8ca5-3cdf70dee711] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:38.653501   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [431113cd-bce9-4ecb-8233-c5463875f1b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:38.653506   68248 system_pods.go:61] "kube-proxy-dzwt7" [a8101c7e-c010-45a3-8746-0dc20c7ef0e2] Running
	I0815 18:36:38.653513   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [84a5d051-d8c1-4097-b92c-e2f0d7a03385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:38.653520   68248 system_pods.go:61] "metrics-server-6867b74b74-wp5rn" [222160bf-6774-49a5-9f30-7582748c8a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:38.653534   68248 system_pods.go:61] "storage-provisioner" [e88c8785-2d8b-47b6-850f-e6cda74a4f5a] Running
	I0815 18:36:38.653549   68248 system_pods.go:74] duration metric: took 7.98765ms to wait for pod list to return data ...
	I0815 18:36:38.653558   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:38.656864   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:38.656893   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:38.656906   68248 node_conditions.go:105] duration metric: took 3.340245ms to run NodePressure ...
	I0815 18:36:38.656923   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:38.918518   68248 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923148   68248 kubeadm.go:739] kubelet initialised
	I0815 18:36:38.923168   68248 kubeadm.go:740] duration metric: took 4.62305ms waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923177   68248 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:38.927933   68248 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.934928   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934953   68248 pod_ready.go:82] duration metric: took 6.994953ms for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.934965   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934974   68248 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.939533   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939558   68248 pod_ready.go:82] duration metric: took 4.573835ms for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.939568   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939575   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.943567   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943590   68248 pod_ready.go:82] duration metric: took 4.004869ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.943601   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943608   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.049176   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049203   68248 pod_ready.go:82] duration metric: took 105.585473ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:39.049212   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049219   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449514   68248 pod_ready.go:93] pod "kube-proxy-dzwt7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:39.449539   68248 pod_ready.go:82] duration metric: took 400.311062ms for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449548   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:37.094139   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094640   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:37.094583   69513 retry.go:31] will retry after 2.268267509s: waiting for machine to come up
	I0815 18:36:39.365595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.365975   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.366007   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:39.365938   69513 retry.go:31] will retry after 3.286154075s: waiting for machine to come up
	I0815 18:36:44.301710   68713 start.go:364] duration metric: took 3m51.402501772s to acquireMachinesLock for "old-k8s-version-278865"
	I0815 18:36:44.301771   68713 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:44.301792   68713 fix.go:54] fixHost starting: 
	I0815 18:36:44.302227   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:44.302265   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:44.319819   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0815 18:36:44.320335   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:44.320975   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:36:44.321003   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:44.321380   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:44.321572   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:36:44.321720   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:36:44.323551   68713 fix.go:112] recreateIfNeeded on old-k8s-version-278865: state=Stopped err=<nil>
	I0815 18:36:44.323586   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	W0815 18:36:44.323748   68713 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:44.325761   68713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-278865" ...
	I0815 18:36:41.456648   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:43.456917   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:42.653801   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654221   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has current primary IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654251   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Found IP for machine: 192.168.61.7
	I0815 18:36:42.654268   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserving static IP address...
	I0815 18:36:42.654714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.654759   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | skip adding static IP to network mk-default-k8s-diff-port-423062 - found existing host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"}
	I0815 18:36:42.654778   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserved static IP address: 192.168.61.7
	I0815 18:36:42.654798   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for SSH to be available...
	I0815 18:36:42.654815   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Getting to WaitForSSH function...
	I0815 18:36:42.657618   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.657968   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.657996   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.658093   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH client type: external
	I0815 18:36:42.658115   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa (-rw-------)
	I0815 18:36:42.658200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:42.658223   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | About to run SSH command:
	I0815 18:36:42.658234   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | exit 0
	I0815 18:36:42.780714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:42.781095   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetConfigRaw
	I0815 18:36:42.781734   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:42.784384   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.784820   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.784853   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.785137   68429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/config.json ...
	I0815 18:36:42.785364   68429 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:42.785390   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:42.785599   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.788083   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.788465   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788655   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.788833   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789006   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.789301   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.789607   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.789625   68429 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:42.889002   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:42.889031   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889237   68429 buildroot.go:166] provisioning hostname "default-k8s-diff-port-423062"
	I0815 18:36:42.889260   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.892072   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892422   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.892445   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892645   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.892846   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.892995   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.893148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.893286   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.893490   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.893505   68429 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-423062 && echo "default-k8s-diff-port-423062" | sudo tee /etc/hostname
	I0815 18:36:43.008310   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-423062
	
	I0815 18:36:43.008347   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.011091   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011446   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.011472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011653   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.011864   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012027   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012159   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.012321   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.012518   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.012537   68429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-423062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-423062/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-423062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:43.121095   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:43.121123   68429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:43.121157   68429 buildroot.go:174] setting up certificates
	I0815 18:36:43.121172   68429 provision.go:84] configureAuth start
	I0815 18:36:43.121186   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:43.121510   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:43.123863   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124178   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.124200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124312   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.126385   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126633   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.126667   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126784   68429 provision.go:143] copyHostCerts
	I0815 18:36:43.126861   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:43.126884   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:43.126944   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:43.127052   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:43.127062   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:43.127090   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:43.127177   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:43.127187   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:43.127215   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:43.127286   68429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-423062 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-423062 localhost minikube]
	I0815 18:36:43.627396   68429 provision.go:177] copyRemoteCerts
	I0815 18:36:43.627460   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:43.627485   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.630025   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630311   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.630340   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630479   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.630670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.630850   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.630976   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:43.712571   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:43.738904   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 18:36:43.764328   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:36:43.787211   68429 provision.go:87] duration metric: took 666.026026ms to configureAuth
	I0815 18:36:43.787241   68429 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:43.787467   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:43.787567   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.789803   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790210   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.790232   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790432   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.790604   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790729   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.791021   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.791169   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.791187   68429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:44.067277   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:44.067307   68429 machine.go:96] duration metric: took 1.281926749s to provisionDockerMachine
	I0815 18:36:44.067322   68429 start.go:293] postStartSetup for "default-k8s-diff-port-423062" (driver="kvm2")
	I0815 18:36:44.067335   68429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:44.067360   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.067711   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:44.067749   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.070224   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070543   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.070573   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070740   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.070925   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.071079   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.071269   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.152784   68429 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:44.157264   68429 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:44.157291   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:44.157364   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:44.157461   68429 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:44.157580   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:44.168520   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:44.195223   68429 start.go:296] duration metric: took 127.886016ms for postStartSetup
	I0815 18:36:44.195268   68429 fix.go:56] duration metric: took 19.045962302s for fixHost
	I0815 18:36:44.195292   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.197711   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198065   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.198090   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198281   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.198438   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198614   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198768   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.198959   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:44.199154   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:44.199172   68429 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:44.301519   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747004.273982003
	
	I0815 18:36:44.301543   68429 fix.go:216] guest clock: 1723747004.273982003
	I0815 18:36:44.301553   68429 fix.go:229] Guest: 2024-08-15 18:36:44.273982003 +0000 UTC Remote: 2024-08-15 18:36:44.195273929 +0000 UTC m=+258.412094909 (delta=78.708074ms)
	I0815 18:36:44.301598   68429 fix.go:200] guest clock delta is within tolerance: 78.708074ms
	I0815 18:36:44.301606   68429 start.go:83] releasing machines lock for "default-k8s-diff-port-423062", held for 19.152336719s
	I0815 18:36:44.301638   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.301903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:44.305012   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305498   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.305524   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305742   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306240   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306425   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306533   68429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:44.306595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.306689   68429 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:44.306714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.309649   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.309838   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310098   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310133   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310250   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310267   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310296   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310457   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310634   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310654   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310794   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310798   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.310947   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.412125   68429 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:44.420070   68429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:44.566014   68429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:44.572209   68429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:44.572283   68429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:44.593041   68429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:44.593067   68429 start.go:495] detecting cgroup driver to use...
	I0815 18:36:44.593145   68429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:44.613683   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:44.627766   68429 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:44.627851   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:44.641172   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:44.654952   68429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:44.778684   68429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:44.965548   68429 docker.go:233] disabling docker service ...
	I0815 18:36:44.965631   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:44.983153   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:44.999109   68429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:45.131097   68429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:45.270930   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:45.287846   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:45.309345   68429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:45.309407   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.320032   68429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:45.320092   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.331647   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.342499   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.353192   68429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:45.364163   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.381124   68429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.403692   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
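	For readers following the CRI-O setup: taken together, the sed edits above amount to a drop-in fragment along these lines. This is a reconstruction from the logged commands, not a capture of the VM's file, and the section headers are assumed since the layout of /etc/crio/crio.conf.d/02-crio.conf is not shown in this log.
	# Sketch of /etc/crio/crio.conf.d/02-crio.conf after the edits above
	# (section headers assumed; values taken from the logged sed commands)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]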
	I0815 18:36:45.415032   68429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:45.424798   68429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:45.424859   68429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:45.439077   68429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
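	Condensed, the netfilter fallback above is equivalent to the following shell sketch; commands and paths are taken from the log, with error handling simplified.
	# Verify bridge traffic is visible to iptables; if the sysctl key is missing,
	# load br_netfilter, then make sure IPv4 forwarding is enabled.
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	  sudo modprobe br_netfilter
	fi
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"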
	I0815 18:36:45.448554   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:45.570697   68429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:45.719575   68429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:45.719655   68429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:45.724415   68429 start.go:563] Will wait 60s for crictl version
	I0815 18:36:45.724476   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:36:45.728443   68429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:45.770935   68429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:45.771023   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.799588   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.830915   68429 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:44.327259   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .Start
	I0815 18:36:44.327431   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring networks are active...
	I0815 18:36:44.328116   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network default is active
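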
	I0815 18:36:44.328601   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network mk-old-k8s-version-278865 is active
	I0815 18:36:44.329081   68713 main.go:141] libmachine: (old-k8s-version-278865) Getting domain xml...
	I0815 18:36:44.331888   68713 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:36:45.633882   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting to get IP...
	I0815 18:36:45.634842   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.635216   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.635286   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.635206   69670 retry.go:31] will retry after 300.377534ms: waiting for machine to come up
	I0815 18:36:45.937793   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.938290   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.938312   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.938236   69670 retry.go:31] will retry after 282.311084ms: waiting for machine to come up
	I0815 18:36:46.222856   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.223327   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.223350   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.223283   69670 retry.go:31] will retry after 354.299649ms: waiting for machine to come up
	I0815 18:36:46.578770   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.579337   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.579360   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.579241   69670 retry.go:31] will retry after 382.947645ms: waiting for machine to come up
	I0815 18:36:46.964003   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.964911   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.964943   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.964824   69670 retry.go:31] will retry after 710.757442ms: waiting for machine to come up
	I0815 18:36:47.676738   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:47.677422   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:47.677450   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:47.677360   69670 retry.go:31] will retry after 588.944709ms: waiting for machine to come up
	I0815 18:36:45.957776   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:48.456345   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:45.832411   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:45.835145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835523   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:45.835553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835762   68429 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:45.840347   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:45.854348   68429 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:45.854471   68429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:45.854527   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:45.899238   68429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:45.899320   68429 ssh_runner.go:195] Run: which lz4
	I0815 18:36:45.903367   68429 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:45.907499   68429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:45.907526   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:47.317850   68429 crio.go:462] duration metric: took 1.414524229s to copy over tarball
	I0815 18:36:47.317929   68429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:49.443172   68429 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125212316s)
	I0815 18:36:49.443206   68429 crio.go:469] duration metric: took 2.125324606s to extract the tarball
	I0815 18:36:49.443215   68429 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:49.483693   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:49.535588   68429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:49.535617   68429 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:49.535627   68429 kubeadm.go:934] updating node { 192.168.61.7 8444 v1.31.0 crio true true} ...
	I0815 18:36:49.535753   68429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-423062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:49.535843   68429 ssh_runner.go:195] Run: crio config
	I0815 18:36:49.587186   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:49.587215   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:49.587232   68429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:49.587257   68429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-423062 NodeName:default-k8s-diff-port-423062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:49.587447   68429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-423062"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:49.587520   68429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:49.598312   68429 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:49.598376   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:49.608382   68429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0815 18:36:49.624449   68429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:49.647224   68429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0815 18:36:49.664848   68429 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:49.668582   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:49.680786   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:49.804940   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:49.826104   68429 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062 for IP: 192.168.61.7
	I0815 18:36:49.826130   68429 certs.go:194] generating shared ca certs ...
	I0815 18:36:49.826147   68429 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:49.826281   68429 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:49.826322   68429 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:49.826331   68429 certs.go:256] generating profile certs ...
	I0815 18:36:49.826403   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.key
	I0815 18:36:49.826461   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key.534debab
	I0815 18:36:49.826528   68429 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key
	I0815 18:36:49.826667   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:49.826713   68429 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:49.826725   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:49.826748   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:49.826777   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:49.826810   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:49.826868   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:49.827597   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:49.855678   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:49.891292   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:49.928612   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:49.961506   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 18:36:49.993955   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:50.019275   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:50.046773   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:50.074201   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:50.101491   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:50.125378   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:50.149974   68429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:50.166393   68429 ssh_runner.go:195] Run: openssl version
	I0815 18:36:50.172182   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:50.182755   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187110   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187155   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.192956   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:50.203680   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:50.214269   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218876   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218925   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.224463   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:50.234811   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:50.245585   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250397   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250446   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.256189   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:50.267342   68429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:50.272011   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:50.278217   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:50.284300   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:50.290402   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:50.296174   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:50.301957   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:50.307807   68429 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:50.307910   68429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:50.307973   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.359833   68429 cri.go:89] found id: ""
	I0815 18:36:50.359923   68429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:50.370306   68429 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:50.370324   68429 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:50.370379   68429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:50.379585   68429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:50.380510   68429 kubeconfig.go:125] found "default-k8s-diff-port-423062" server: "https://192.168.61.7:8444"
	I0815 18:36:50.384136   68429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:50.393393   68429 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.7
	I0815 18:36:50.393428   68429 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:50.393441   68429 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:50.393494   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.428085   68429 cri.go:89] found id: ""
	I0815 18:36:50.428162   68429 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:50.444032   68429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:50.454927   68429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:50.454948   68429 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:50.455000   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 18:36:50.464733   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:50.464797   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:50.473973   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 18:36:50.482861   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:50.482910   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:50.492213   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.501173   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:50.501230   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.510299   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 18:36:50.519262   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:50.519308   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:50.528632   68429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:50.537914   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:50.655230   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:48.268221   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:48.268790   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:48.268814   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:48.268736   69670 retry.go:31] will retry after 781.489196ms: waiting for machine to come up
	I0815 18:36:49.051824   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:49.052246   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:49.052277   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:49.052182   69670 retry.go:31] will retry after 1.393037007s: waiting for machine to come up
	I0815 18:36:50.446428   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:50.446860   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:50.446892   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:50.446800   69670 retry.go:31] will retry after 1.826779004s: waiting for machine to come up
	I0815 18:36:52.275716   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:52.276208   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:52.276231   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:52.276167   69670 retry.go:31] will retry after 1.746726312s: waiting for machine to come up
	I0815 18:36:50.458388   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:52.147996   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:52.148026   68248 pod_ready.go:82] duration metric: took 12.698470185s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:52.148039   68248 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:54.153927   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:51.670903   68429 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015612511s)
	I0815 18:36:51.670943   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:51.985806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.069082   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.189200   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:52.189298   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:52.689767   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.189633   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.205099   68429 api_server.go:72] duration metric: took 1.015908263s to wait for apiserver process to appear ...
	I0815 18:36:53.205136   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:53.205162   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:53.205695   68429 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0815 18:36:53.705285   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.721139   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.721177   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:55.721193   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.750790   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.750825   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:56.205675   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.212464   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.212509   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:56.705700   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.716232   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.716277   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:57.205663   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:57.211081   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:36:57.217736   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:57.217763   68429 api_server.go:131] duration metric: took 4.012620084s to wait for apiserver health ...
	I0815 18:36:57.217772   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:57.217778   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:57.219455   68429 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:54.025067   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:54.025508   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:54.025535   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:54.025462   69670 retry.go:31] will retry after 2.693215306s: waiting for machine to come up
	I0815 18:36:56.721740   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:56.722139   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:56.722178   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:56.722070   69670 retry.go:31] will retry after 3.370623363s: waiting for machine to come up
	I0815 18:36:57.220672   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:57.241710   68429 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:57.262714   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:57.272766   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:57.272822   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:57.272836   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:57.272849   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:57.272862   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:57.272872   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:36:57.272887   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:57.272896   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:57.272902   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:36:57.272913   68429 system_pods.go:74] duration metric: took 10.175415ms to wait for pod list to return data ...
	I0815 18:36:57.272924   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:57.276880   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:57.276915   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:57.276929   68429 node_conditions.go:105] duration metric: took 3.998879ms to run NodePressure ...
	I0815 18:36:57.276951   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:57.554251   68429 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558062   68429 kubeadm.go:739] kubelet initialised
	I0815 18:36:57.558084   68429 kubeadm.go:740] duration metric: took 3.811943ms waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558091   68429 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:57.562470   68429 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.567212   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567232   68429 pod_ready.go:82] duration metric: took 4.742538ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.567240   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567245   68429 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.571217   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571237   68429 pod_ready.go:82] duration metric: took 3.984908ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.571247   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571255   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.575456   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575494   68429 pod_ready.go:82] duration metric: took 4.232215ms for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.575507   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575515   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.665876   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665902   68429 pod_ready.go:82] duration metric: took 90.37918ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.665914   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665921   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.066377   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066402   68429 pod_ready.go:82] duration metric: took 400.475025ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.066411   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066426   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.465739   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465767   68429 pod_ready.go:82] duration metric: took 399.331024ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.465779   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465787   68429 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.866772   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866798   68429 pod_ready.go:82] duration metric: took 401.001046ms for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.866809   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866817   68429 pod_ready.go:39] duration metric: took 1.308717049s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:58.866835   68429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:36:58.878274   68429 ops.go:34] apiserver oom_adj: -16
	I0815 18:36:58.878298   68429 kubeadm.go:597] duration metric: took 8.507965813s to restartPrimaryControlPlane
	I0815 18:36:58.878308   68429 kubeadm.go:394] duration metric: took 8.570508558s to StartCluster
	I0815 18:36:58.878327   68429 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.878499   68429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:36:58.879927   68429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.880213   68429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:36:58.880262   68429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:36:58.880339   68429 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880375   68429 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-423062"
	I0815 18:36:58.880374   68429 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-423062"
	W0815 18:36:58.880383   68429 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:36:58.880367   68429 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880403   68429 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.880410   68429 addons.go:243] addon metrics-server should already be in state true
	I0815 18:36:58.880414   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880422   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:58.880428   68429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-423062"
	I0815 18:36:58.880434   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880772   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880778   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880801   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880820   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880826   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880855   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.882047   68429 out.go:177] * Verifying Kubernetes components...
	I0815 18:36:58.883440   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:58.895575   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0815 18:36:58.895577   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0815 18:36:58.895739   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0815 18:36:58.896031   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896063   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896121   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896511   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896529   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896612   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896631   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896749   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896768   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896917   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.896963   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897099   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897132   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.897483   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897527   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.897535   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897558   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.900773   68429 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.900796   68429 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:36:58.900825   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.901206   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.901238   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.912877   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0815 18:36:58.912903   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37245
	I0815 18:36:58.913271   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913344   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913835   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913845   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913852   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.913862   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.914177   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914218   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914361   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.914408   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.916165   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.916601   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.918553   68429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:36:58.918560   68429 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:36:56.154697   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.654414   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.919539   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0815 18:36:58.919773   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:36:58.919790   68429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:36:58.919809   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919884   68429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:58.919900   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:36:58.919916   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919945   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.920330   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.920343   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.920777   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.921363   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.921401   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.923262   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923629   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.923656   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923684   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924108   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924256   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924319   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.924337   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924501   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924564   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.924688   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.924773   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924944   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.925266   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.938064   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0815 18:36:58.938411   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.938762   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.938782   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.939057   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.939214   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.941134   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.941395   68429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:58.941414   68429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:36:58.941436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.943936   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944331   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.944355   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.944765   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.944900   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.944977   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:59.069466   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:59.090259   68429 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:36:59.203591   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:59.232676   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:36:59.232705   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:36:59.273079   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:59.287625   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:36:59.287653   68429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:36:59.359798   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:36:59.359821   68429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:36:59.406350   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:00.373429   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16980511s)
	I0815 18:37:00.373477   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373495   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373501   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.10037967s)
	I0815 18:37:00.373546   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373563   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373787   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373805   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373848   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373852   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373863   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373866   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373890   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373879   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373937   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.374313   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374322   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.374326   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.374344   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374355   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.379434   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.379450   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.379666   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.379679   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.389853   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.389872   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390152   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390173   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390181   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.390189   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390396   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390447   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390461   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390475   68429 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-423062"
	I0815 18:37:00.392530   68429 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:37:00.393703   68429 addons.go:510] duration metric: took 1.51344438s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 18:37:00.093896   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:00.094391   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:37:00.094453   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:37:00.094333   69670 retry.go:31] will retry after 2.855023319s: waiting for machine to come up
	I0815 18:37:04.297557   67936 start.go:364] duration metric: took 52.755115386s to acquireMachinesLock for "no-preload-599042"
	I0815 18:37:04.297614   67936 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:37:04.297639   67936 fix.go:54] fixHost starting: 
	I0815 18:37:04.298066   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:04.298096   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:04.317897   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
	I0815 18:37:04.318309   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:04.318797   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:04.318822   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:04.319191   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:04.319388   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:04.319543   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:04.320970   67936 fix.go:112] recreateIfNeeded on no-preload-599042: state=Stopped err=<nil>
	I0815 18:37:04.320994   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	W0815 18:37:04.321164   67936 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:37:04.322689   67936 out.go:177] * Restarting existing kvm2 VM for "no-preload-599042" ...
	I0815 18:37:00.654833   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:03.154235   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:02.950449   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950903   68713 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:37:02.950931   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950941   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:37:02.951319   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.951356   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | skip adding static IP to network mk-old-k8s-version-278865 - found existing host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"}
	I0815 18:37:02.951376   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:37:02.951393   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:37:02.951424   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:37:02.953498   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.953804   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953927   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:37:02.953957   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:37:02.953989   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:02.954001   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:37:02.954009   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:37:03.076431   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:03.076748   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:37:03.077325   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.079733   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080100   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.080132   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080332   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:37:03.080537   68713 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:03.080554   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:03.080717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.082778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083140   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.083168   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083331   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.083482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083612   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083730   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.083881   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.084067   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.084078   68713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:03.188779   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:03.188813   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189045   68713 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:37:03.189069   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189284   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.191858   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192171   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.192192   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192328   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.192533   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192676   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192822   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.193015   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.193180   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.193192   68713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:37:03.313099   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:37:03.313129   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.315840   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316196   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.316226   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316378   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.316608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316760   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316885   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.317001   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.317184   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.317207   68713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:03.429897   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:03.429934   68713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:03.429962   68713 buildroot.go:174] setting up certificates
	I0815 18:37:03.429972   68713 provision.go:84] configureAuth start
	I0815 18:37:03.429983   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.430274   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.432724   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433053   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.433083   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433212   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.435181   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435514   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.435543   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435657   68713 provision.go:143] copyHostCerts
	I0815 18:37:03.435715   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:03.435736   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:03.435804   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:03.435919   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:03.435929   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:03.435959   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:03.436045   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:03.436055   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:03.436082   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:03.436170   68713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
	I0815 18:37:03.604924   68713 provision.go:177] copyRemoteCerts
	I0815 18:37:03.604979   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:03.605003   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.607328   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607616   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.607634   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607821   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.608016   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.608171   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.608429   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:03.690560   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:03.714632   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:37:03.737805   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:03.762338   68713 provision.go:87] duration metric: took 332.353741ms to configureAuth
	I0815 18:37:03.762371   68713 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:03.762543   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:37:03.762608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.765626   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.765988   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.766018   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.766211   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.766380   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766574   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766712   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.766897   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.767053   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.767069   68713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:04.050635   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:04.050663   68713 machine.go:96] duration metric: took 970.113556ms to provisionDockerMachine
	I0815 18:37:04.050674   68713 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:37:04.050685   68713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:04.050717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.051048   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:04.051081   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.053709   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054095   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.054124   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054432   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.054622   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.054774   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.054914   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.139381   68713 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:04.145097   68713 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:04.145124   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:04.145201   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:04.145298   68713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:04.145421   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:04.156166   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:04.181562   68713 start.go:296] duration metric: took 130.872499ms for postStartSetup
	I0815 18:37:04.181605   68713 fix.go:56] duration metric: took 19.879821037s for fixHost
	I0815 18:37:04.181629   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.184268   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184652   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.184682   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184917   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.185151   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185345   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185502   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.185677   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:04.185925   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:04.185938   68713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:04.297391   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747024.271483326
	
	I0815 18:37:04.297413   68713 fix.go:216] guest clock: 1723747024.271483326
	I0815 18:37:04.297423   68713 fix.go:229] Guest: 2024-08-15 18:37:04.271483326 +0000 UTC Remote: 2024-08-15 18:37:04.181610291 +0000 UTC m=+251.426055371 (delta=89.873035ms)
	I0815 18:37:04.297448   68713 fix.go:200] guest clock delta is within tolerance: 89.873035ms
	I0815 18:37:04.297455   68713 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 19.99571173s
	I0815 18:37:04.297504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.297818   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:04.300970   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301425   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.301455   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301609   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302194   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302404   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302495   68713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:04.302545   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.302679   68713 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:04.302705   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.305673   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.305903   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306066   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306092   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306273   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306301   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306337   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306537   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306657   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306664   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306827   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306834   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.307009   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.409319   68713 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:04.415576   68713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:04.565772   68713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:04.571909   68713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:04.571996   68713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:04.588400   68713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:04.588427   68713 start.go:495] detecting cgroup driver to use...
	I0815 18:37:04.588528   68713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:04.604253   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:04.619003   68713 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:04.619051   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:04.632530   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:04.646080   68713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:04.763855   68713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:04.922470   68713 docker.go:233] disabling docker service ...
	I0815 18:37:04.922566   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:04.937301   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:04.950721   68713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:05.079767   68713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:05.210207   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:05.225569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:05.247998   68713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:37:05.248070   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.262851   68713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:05.262924   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.274489   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.285901   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.298749   68713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:05.310052   68713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:05.320992   68713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:05.321073   68713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:05.340323   68713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:05.354069   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:05.483573   68713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:05.647020   68713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:05.647094   68713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:05.653850   68713 start.go:563] Will wait 60s for crictl version
	I0815 18:37:05.653924   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:05.658476   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:05.697818   68713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:05.697907   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.724931   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.755831   68713 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:37:01.094934   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:03.594364   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:05.756950   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:05.759791   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760188   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:05.760220   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760468   68713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:05.764753   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:05.777462   68713 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:05.777614   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:37:05.777679   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:05.848895   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:05.848967   68713 ssh_runner.go:195] Run: which lz4
	I0815 18:37:05.853103   68713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:37:05.858012   68713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:37:05.858046   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:37:07.520567   68713 crio.go:462] duration metric: took 1.667489785s to copy over tarball
	I0815 18:37:07.520642   68713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:37:04.324093   67936 main.go:141] libmachine: (no-preload-599042) Calling .Start
	I0815 18:37:04.324263   67936 main.go:141] libmachine: (no-preload-599042) Ensuring networks are active...
	I0815 18:37:04.325099   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network default is active
	I0815 18:37:04.325778   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network mk-no-preload-599042 is active
	I0815 18:37:04.326007   67936 main.go:141] libmachine: (no-preload-599042) Getting domain xml...
	I0815 18:37:04.328184   67936 main.go:141] libmachine: (no-preload-599042) Creating domain...
	I0815 18:37:05.626206   67936 main.go:141] libmachine: (no-preload-599042) Waiting to get IP...
	I0815 18:37:05.627374   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.627877   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.627935   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.627844   69876 retry.go:31] will retry after 199.774188ms: waiting for machine to come up
	I0815 18:37:05.829673   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.830213   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.830240   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.830170   69876 retry.go:31] will retry after 255.850483ms: waiting for machine to come up
	I0815 18:37:06.087766   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.088378   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.088405   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.088330   69876 retry.go:31] will retry after 351.231421ms: waiting for machine to come up
	I0815 18:37:06.440937   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.441597   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.441626   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.441572   69876 retry.go:31] will retry after 602.620924ms: waiting for machine to come up
	I0815 18:37:07.046269   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.046745   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.046769   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.046712   69876 retry.go:31] will retry after 578.450642ms: waiting for machine to come up
	I0815 18:37:07.627330   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.627832   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.627859   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.627791   69876 retry.go:31] will retry after 731.331176ms: waiting for machine to come up
	I0815 18:37:08.361310   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:08.361746   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:08.361776   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:08.361706   69876 retry.go:31] will retry after 1.089237688s: waiting for machine to come up
	I0815 18:37:05.157378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:07.162990   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:09.654672   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:06.093822   68429 node_ready.go:49] node "default-k8s-diff-port-423062" has status "Ready":"True"
	I0815 18:37:06.093853   68429 node_ready.go:38] duration metric: took 7.003558244s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:37:06.093867   68429 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:06.103462   68429 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111214   68429 pod_ready.go:93] pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.111235   68429 pod_ready.go:82] duration metric: took 7.746382ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111244   68429 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117713   68429 pod_ready.go:93] pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.117739   68429 pod_ready.go:82] duration metric: took 6.487608ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117750   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:08.126216   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.128095   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.534169   68713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013498464s)
	I0815 18:37:10.534194   68713 crio.go:469] duration metric: took 3.013602868s to extract the tarball
	I0815 18:37:10.534201   68713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:37:10.578998   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:10.619043   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:10.619146   68713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:10.619246   68713 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.619247   68713 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.619278   68713 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:37:10.619275   68713 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.619291   68713 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.619304   68713 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.619322   68713 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.619405   68713 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621367   68713 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.621384   68713 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:37:10.621468   68713 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.621500   68713 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.621596   68713 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.621646   68713 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621706   68713 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.621897   68713 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.798617   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.828530   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:37:10.859528   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.918714   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.977028   68713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:37:10.977073   68713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.977119   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:10.980573   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.985503   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.990642   68713 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:37:10.990684   68713 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:37:10.990733   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.000388   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.007526   68713 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:37:11.007589   68713 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.007642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.008543   68713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:37:11.008581   68713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.008621   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.008642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077224   68713 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:37:11.077269   68713 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077228   68713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:37:11.077347   68713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.077371   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111299   68713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:37:11.111376   68713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.111387   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.111421   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111471   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.156942   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.156944   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.156997   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.263355   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.263448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.263455   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.263544   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.291407   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.312626   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.334606   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.427937   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.433739   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:11.435371   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.439448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.439541   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:37:11.450901   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.477906   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.520009   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:37:11.572349   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:37:11.686243   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:37:11.686295   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:37:11.686325   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:37:11.686378   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:37:11.686420   68713 cache_images.go:92] duration metric: took 1.067250234s to LoadCachedImages
	W0815 18:37:11.686494   68713 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
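The lines above are minikube's image-cache reconciliation for v1.20.0: each required image is absent from the CRI-O store, so any stale copy is removed with crictl rmi before minikube falls back to the on-disk cache (which is also missing here, hence the warning). A minimal by-hand sketch of the same check, assuming the standard crictl CLI is available on the node:

    # list images known to the CRI runtime and look for the expected tag
    sudo crictl images | grep kube-proxy || echo "kube-proxy:v1.20.0 not present"
    # remove a stale copy so it can be re-pulled or re-loaded from the cache
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0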
	I0815 18:37:11.686508   68713 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:37:11.686620   68713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:11.686693   68713 ssh_runner.go:195] Run: crio config
	I0815 18:37:11.736781   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:37:11.736808   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:11.736824   68713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:11.736851   68713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:37:11.737039   68713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:11.737120   68713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:37:11.747511   68713 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:11.747585   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:11.757850   68713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:37:11.775982   68713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:11.792938   68713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:37:11.811576   68713 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:11.815708   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:11.829992   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:11.983884   68713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:12.002603   68713 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:37:12.002632   68713 certs.go:194] generating shared ca certs ...
	I0815 18:37:12.002682   68713 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.002867   68713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:12.002926   68713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:12.002942   68713 certs.go:256] generating profile certs ...
	I0815 18:37:12.025160   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:37:12.025296   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:37:12.025351   68713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:37:12.025516   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:12.025578   68713 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:12.025591   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:12.025627   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:12.025661   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:12.025691   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:12.025746   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:12.026614   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:12.066771   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:12.109649   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:12.176744   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:12.207990   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:37:12.244999   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:37:12.282338   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:12.308761   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:37:12.332316   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:12.355977   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:12.379169   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:12.405472   68713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:12.424110   68713 ssh_runner.go:195] Run: openssl version
	I0815 18:37:12.430231   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:12.441531   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.445971   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.446061   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.452134   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:12.466809   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:12.478211   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482659   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482708   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.490225   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:12.504908   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:12.516825   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521854   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521911   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.527884   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
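The ln -fs commands above follow the OpenSSL c_rehash convention: each CA is linked under /etc/ssl/certs by its subject hash plus a ".0" suffix so that OpenSSL can locate it by hash. A small sketch of deriving that link name by hand, using the same minikubeCA.pem path as the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"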
	I0815 18:37:12.539398   68713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:12.544010   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:12.549918   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:12.555714   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:12.561895   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:12.567736   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:12.573664   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
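The -checkend 86400 probes above ask OpenSSL whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire within that window. The same check with an explicit message, as a sketch against one of the paths above:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate is valid for at least another 24h"
    else
        echo "certificate expires (or has already expired) within 24h"
    fi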
	I0815 18:37:12.579510   68713 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:12.579627   68713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:12.579688   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.621503   68713 cri.go:89] found id: ""
	I0815 18:37:12.621576   68713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:12.632722   68713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:12.632746   68713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:12.632796   68713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:12.643192   68713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:12.644607   68713 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:12.645629   68713 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-278865" cluster setting kubeconfig missing "old-k8s-version-278865" context setting]
	I0815 18:37:12.647073   68713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.653052   68713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:12.665777   68713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.89
	I0815 18:37:12.665808   68713 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:12.665821   68713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:12.665872   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.713574   68713 cri.go:89] found id: ""
	I0815 18:37:12.713641   68713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:12.731459   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:12.741769   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:12.741789   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:12.741833   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:12.750990   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:12.751049   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:12.761621   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:12.771204   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:12.771261   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:12.782012   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:09.452971   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:09.453451   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:09.453494   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:09.453393   69876 retry.go:31] will retry after 1.35461204s: waiting for machine to come up
	I0815 18:37:10.809664   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:10.810127   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:10.810158   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:10.810065   69876 retry.go:31] will retry after 1.709820883s: waiting for machine to come up
	I0815 18:37:12.521458   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:12.521988   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:12.522016   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:12.521930   69876 retry.go:31] will retry after 1.401971708s: waiting for machine to come up
	I0815 18:37:13.925401   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:13.925868   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:13.925898   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:13.925824   69876 retry.go:31] will retry after 2.768002946s: waiting for machine to come up
	I0815 18:37:11.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:14.154561   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.400960   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:13.128357   68429 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.128379   68429 pod_ready.go:82] duration metric: took 7.010621879s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.128389   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136617   68429 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.136638   68429 pod_ready.go:82] duration metric: took 8.242471ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136648   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143530   68429 pod_ready.go:93] pod "kube-proxy-bnxv7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.143551   68429 pod_ready.go:82] duration metric: took 6.895931ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143563   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151691   68429 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.151721   68429 pod_ready.go:82] duration metric: took 8.149821ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151735   68429 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:15.158172   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.791928   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:12.791994   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:12.801858   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:12.811023   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:12.811083   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:12.822189   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:12.834293   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:12.974325   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.452192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.690442   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.798270   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
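The five commands above re-run individual kubeadm init phases against the generated /var/tmp/minikube/kubeadm.yaml rather than performing a full kubeadm init: certs, kubeconfigs, kubelet bootstrap, static control-plane manifests, and the local etcd manifest, in that order. Consolidated, the by-hand equivalent (same binary path and config file as in the log) would be:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml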
	I0815 18:37:13.900783   68713 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:13.900877   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.401954   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.901809   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.401755   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.901010   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.401794   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.901149   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:17.401599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.694999   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:16.695488   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:16.695506   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:16.695430   69876 retry.go:31] will retry after 2.308386075s: waiting for machine to come up
	I0815 18:37:16.154692   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:18.653763   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.159197   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:19.159442   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.901511   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.401720   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.900976   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.401223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.901522   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.901573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:22.401279   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
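The repeated pgrep lines above are the apiserver wait loop: minikube polls for a kube-apiserver process roughly every 500ms until one appears or its overall timeout elapses. A stand-alone version of the same poll (the interval matches the timestamps above; the 60-second budget is an assumption):

    # poll twice a second, up to 120 attempts (~60s)
    for _ in $(seq 1 120); do
        if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
            echo "kube-apiserver is up"
            break
        fi
        sleep 0.5
    done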
	I0815 18:37:19.005581   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:19.005979   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:19.006008   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:19.005930   69876 retry.go:31] will retry after 2.758801207s: waiting for machine to come up
	I0815 18:37:21.766860   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767286   67936 main.go:141] libmachine: (no-preload-599042) Found IP for machine: 192.168.72.14
	I0815 18:37:21.767303   67936 main.go:141] libmachine: (no-preload-599042) Reserving static IP address...
	I0815 18:37:21.767314   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has current primary IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767722   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.767745   67936 main.go:141] libmachine: (no-preload-599042) Reserved static IP address: 192.168.72.14
	I0815 18:37:21.767757   67936 main.go:141] libmachine: (no-preload-599042) DBG | skip adding static IP to network mk-no-preload-599042 - found existing host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"}
	I0815 18:37:21.767768   67936 main.go:141] libmachine: (no-preload-599042) DBG | Getting to WaitForSSH function...
	I0815 18:37:21.767780   67936 main.go:141] libmachine: (no-preload-599042) Waiting for SSH to be available...
	I0815 18:37:21.769674   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.769950   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.769973   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.770072   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH client type: external
	I0815 18:37:21.770103   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa (-rw-------)
	I0815 18:37:21.770134   67936 main.go:141] libmachine: (no-preload-599042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:21.770147   67936 main.go:141] libmachine: (no-preload-599042) DBG | About to run SSH command:
	I0815 18:37:21.770162   67936 main.go:141] libmachine: (no-preload-599042) DBG | exit 0
	I0815 18:37:21.888536   67936 main.go:141] libmachine: (no-preload-599042) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:21.888900   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetConfigRaw
	I0815 18:37:21.889541   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:21.892351   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892730   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.892760   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892976   67936 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/config.json ...
	I0815 18:37:21.893181   67936 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:21.893203   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:21.893404   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.895471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895774   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.895812   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895967   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.896153   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896334   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896522   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.896697   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.896872   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.896884   67936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:21.992598   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:21.992622   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.992856   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:37:21.992884   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.993095   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.995586   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.995902   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.995930   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.996051   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.996239   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996375   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996538   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.996691   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.996869   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.996884   67936 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-599042 && echo "no-preload-599042" | sudo tee /etc/hostname
	I0815 18:37:22.106513   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-599042
	
	I0815 18:37:22.106553   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.109655   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110111   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.110143   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110362   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.110548   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110718   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110838   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.110970   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.111141   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.111162   67936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-599042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-599042/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-599042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:22.221858   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:22.221898   67936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:22.221924   67936 buildroot.go:174] setting up certificates
	I0815 18:37:22.221938   67936 provision.go:84] configureAuth start
	I0815 18:37:22.221956   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:22.222278   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:22.225058   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225374   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.225410   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225544   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.227539   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.227885   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.227929   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.228052   67936 provision.go:143] copyHostCerts
	I0815 18:37:22.228111   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:22.228126   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:22.228190   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:22.228273   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:22.228282   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:22.228301   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:22.228352   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:22.228359   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:22.228375   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:22.228428   67936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.no-preload-599042 san=[127.0.0.1 192.168.72.14 localhost minikube no-preload-599042]
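The server certificate generated above is issued for the SAN list [127.0.0.1 192.168.72.14 localhost minikube no-preload-599042]. One way to confirm which names actually ended up in the cert, using the server.pem path from the log, is:

    openssl x509 -noout -text -in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'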
	I0815 18:37:22.383520   67936 provision.go:177] copyRemoteCerts
	I0815 18:37:22.383578   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:22.383601   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.386048   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386303   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.386338   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386566   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.386722   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.386894   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.387036   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.470828   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:22.494929   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:22.519545   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:37:22.544417   67936 provision.go:87] duration metric: took 322.465732ms to configureAuth
	I0815 18:37:22.544442   67936 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:22.544661   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:22.544736   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.547284   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547610   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.547641   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547876   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.548076   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548271   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548413   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.548594   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.548795   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.548818   67936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:22.803896   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:22.803924   67936 machine.go:96] duration metric: took 910.728961ms to provisionDockerMachine
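The sysconfig write a few lines above drops an --insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O so the option takes effect. A quick verification sketch on the node (an assumed follow-up, not part of the test run):

    cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    sudo systemctl is-active crio       # "active" once the restart has completed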
	I0815 18:37:22.803935   67936 start.go:293] postStartSetup for "no-preload-599042" (driver="kvm2")
	I0815 18:37:22.803945   67936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:22.803959   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:22.804274   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:22.804322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.807041   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807437   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.807467   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807570   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.807747   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.807906   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.808002   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.887667   67936 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:22.892368   67936 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:22.892393   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:22.892480   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:22.892588   67936 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:22.892681   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:22.901987   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:22.927782   67936 start.go:296] duration metric: took 123.834401ms for postStartSetup
	I0815 18:37:22.927823   67936 fix.go:56] duration metric: took 18.630196933s for fixHost
	I0815 18:37:22.927848   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.930378   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930728   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.930755   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930868   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.931043   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931386   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.931538   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.931705   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.931718   67936 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:23.029393   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747042.997661196
	
	I0815 18:37:23.029423   67936 fix.go:216] guest clock: 1723747042.997661196
	I0815 18:37:23.029433   67936 fix.go:229] Guest: 2024-08-15 18:37:22.997661196 +0000 UTC Remote: 2024-08-15 18:37:22.927828036 +0000 UTC m=+353.975665928 (delta=69.83316ms)
	I0815 18:37:23.029455   67936 fix.go:200] guest clock delta is within tolerance: 69.83316ms
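The fix.go lines above read the guest's `date +%s.%N`, compare it with the host's wall clock, and accept the drift because it is under tolerance. A minimal sketch of that comparison in Go, using the two timestamps from the log; the 2-second tolerance is an assumption for illustration, not minikube's actual constant:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the guest's `date +%s.%N` output
// (e.g. "1723747042.997661196") into a time.Time.
// The sketch assumes the full 9-digit nanosecond fraction that %N emits.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Both values are taken from the log lines above.
	guest, err := parseGuestClock("1723747042.997661196")
	if err != nil {
		panic(err)
	}
	host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2024-08-15 18:37:22.927828036 +0000 UTC")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	// The tolerance value is an illustrative assumption.
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
```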
	I0815 18:37:23.029465   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 18.731874864s
	I0815 18:37:23.029491   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.029730   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:23.031885   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032242   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.032261   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032449   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.032908   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033062   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033149   67936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:23.033197   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.033303   67936 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:23.033322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.035943   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.035987   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036327   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036433   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036463   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036482   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036657   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036836   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036855   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.036966   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.037039   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037119   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037183   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.037242   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.117399   67936 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:23.138614   67936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:23.287862   67936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:23.293943   67936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:23.294013   67936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:23.310957   67936 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:23.310987   67936 start.go:495] detecting cgroup driver to use...
	I0815 18:37:23.311067   67936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:23.326641   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:23.340650   67936 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:23.340708   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:23.355401   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:23.369033   67936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:23.480891   67936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:23.629690   67936 docker.go:233] disabling docker service ...
	I0815 18:37:23.629782   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:23.644372   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:23.658312   67936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:23.779999   67936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:23.902630   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:23.917453   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:23.935696   67936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:37:23.935749   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.946031   67936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:23.946106   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.956639   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.967148   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.978049   67936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:23.989000   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.999290   67936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.017002   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.027432   67936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:24.036714   67936 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:24.036770   67936 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:24.048956   67936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:24.058269   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:24.173548   67936 ssh_runner.go:195] Run: sudo systemctl restart crio
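The preceding steps pin CRI-O to the pause:3.10 image, switch it to the cgroupfs driver, open unprivileged low ports, and restart the service, all through sed edits of /etc/crio/crio.conf.d/02-crio.conf. A rough Go sketch that drives the same shell commands with os/exec; the commands are copied from the log and error handling is simplified:

```go
package main

import (
	"log"
	"os/exec"
)

// run executes one of the shell commands shown in the log via `sudo sh -c`.
func run(cmd string) {
	if out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput(); err != nil {
		log.Fatalf("%q failed: %v\n%s", cmd, err, out)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Pin the pause image and switch CRI-O to the cgroupfs driver.
	run(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' ` + conf)
	run(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf)
	run(`sed -i '/conmon_cgroup = .*/d' ` + conf)
	run(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf)
	// Allow pods to bind unprivileged low ports.
	run(`grep -q "^ *default_sysctls" ` + conf + ` || sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' ` + conf)
	run(`sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' ` + conf)
	// Pick up the changes and restart the runtime.
	run("systemctl daemon-reload")
	run("systemctl restart crio")
}
```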
	I0815 18:37:24.316383   67936 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:24.316462   67936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:24.321726   67936 start.go:563] Will wait 60s for crictl version
	I0815 18:37:24.321803   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.325718   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:24.362995   67936 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:24.363099   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.392678   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.424128   67936 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:37:20.654186   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:23.154893   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:21.658499   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:24.159865   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:22.901608   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.401519   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.901287   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.401831   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.901547   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.401220   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.901109   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.401441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.901515   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:27.401258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
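The repeated `pgrep -xnf kube-apiserver.*minikube.*` lines are a poll loop on another profile waiting for the apiserver process to appear, firing roughly every 500ms. A small sketch of such a wait loop; the interval and timeout here are assumptions for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf <pattern>` until it succeeds or the timeout expires.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 as soon as a process matching the full command line exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q after %v", pattern, timeout)
}

func main() {
	// Pattern copied from the log above; interval and timeout are illustrative.
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver is running")
}
```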
	I0815 18:37:24.425451   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:24.428263   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428631   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:24.428656   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428833   67936 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:24.433343   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:24.446011   67936 kubeadm.go:883] updating cluster {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:24.446123   67936 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:37:24.446168   67936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:24.484321   67936 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:37:24.484346   67936 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:24.484417   67936 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.484429   67936 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.484444   67936 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.484470   67936 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.484472   67936 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.484581   67936 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.484583   67936 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 18:37:24.484585   67936 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485844   67936 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 18:37:24.485852   67936 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.485837   67936 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.485906   67936 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.646217   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.653405   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.658441   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.662835   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.662858   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.681979   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.715361   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 18:37:24.722352   67936 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 18:37:24.722391   67936 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.722450   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.787439   67936 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 18:37:24.787486   67936 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.787530   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810570   67936 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 18:37:24.810606   67936 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 18:37:24.810612   67936 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.810630   67936 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.810666   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810667   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841566   67936 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 18:37:24.841617   67936 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.841669   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841698   67936 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 18:37:24.841743   67936 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.841800   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.950875   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.950918   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.950933   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.950989   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.951004   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.951052   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.079551   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.079601   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.079634   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.084852   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.084874   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.084910   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.216095   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.216235   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.216308   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.216384   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.216400   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.216431   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.336055   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 18:37:25.336126   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 18:37:25.336180   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.336222   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:25.336181   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 18:37:25.336320   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:25.352527   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 18:37:25.352566   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 18:37:25.352592   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 18:37:25.352639   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:25.352650   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:25.352702   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:25.355747   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 18:37:25.355764   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355769   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 18:37:25.355797   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355806   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 18:37:25.363222   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 18:37:25.363257   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 18:37:25.363435   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
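Each `stat -c "%s %y"` followed by `copy: skipping ... (exists)` above is a decision to skip re-transferring a cached image tarball that already exists on the guest. A simplified local version of that check; the size-only comparison and the paths are illustrative, and minikube additionally compares the mtime reported by stat:

```go
package main

import (
	"fmt"
	"os"
)

// needsCopy reports whether dst is missing or differs in size from src,
// mirroring the "copy: skipping ... (exists)" decision in the log above.
func needsCopy(src, dst string) (bool, error) {
	s, err := os.Stat(src)
	if err != nil {
		return false, err
	}
	d, err := os.Stat(dst)
	if os.IsNotExist(err) {
		return true, nil // nothing on the target yet
	}
	if err != nil {
		return false, err
	}
	return s.Size() != d.Size(), nil
}

func main() {
	// Paths are illustrative stand-ins for the cache and guest paths in the log.
	need, err := needsCopy(
		"cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0",
		"/var/lib/minikube/images/kube-proxy_v1.31.0",
	)
	if err != nil {
		fmt.Println("stat failed:", err)
		return
	}
	if need {
		fmt.Println("would transfer the tarball")
	} else {
		fmt.Println("copy: skipping (exists)")
	}
}
```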
	I0815 18:37:25.476601   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142118   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.786287506s)
	I0815 18:37:28.142134   67936 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.665496935s)
	I0815 18:37:28.142146   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 18:37:28.142177   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142190   67936 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 18:37:28.142220   67936 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142244   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142259   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:25.155516   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.156071   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:29.157389   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:26.658491   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:28.659080   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.901777   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.401103   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.901746   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.401521   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.901691   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.401326   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.901672   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.401534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.901013   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:32.401696   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.598348   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.456076001s)
	I0815 18:37:29.598380   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 18:37:29.598404   67936 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598407   67936 ssh_runner.go:235] Completed: which crictl: (1.456124508s)
	I0815 18:37:29.598451   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598474   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.495864   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.897383444s)
	I0815 18:37:31.495897   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.897403663s)
	I0815 18:37:31.495902   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 18:37:31.495931   67936 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.657799   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:34.156377   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:31.158308   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:33.159177   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:35.668218   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:32.901441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.901095   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.401705   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.901020   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.401019   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.901094   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.400952   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.901717   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:37.401701   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.526372   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.030374686s)
	I0815 18:37:35.526410   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 18:37:35.526422   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.030343547s)
	I0815 18:37:35.526438   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.526482   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:35.526483   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.570806   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 18:37:35.570906   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:37.500059   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.973499408s)
	I0815 18:37:37.500098   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 18:37:37.500120   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:37.500072   67936 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.929150036s)
	I0815 18:37:37.500208   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 18:37:37.500161   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:36.157239   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.656856   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.158685   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:40.158728   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:37.901353   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.401426   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.901599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.401173   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.901593   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.401758   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.401698   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:42.401409   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.563532   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.063281797s)
	I0815 18:37:39.563562   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 18:37:39.563595   67936 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:39.563642   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:40.208180   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 18:37:40.208232   67936 cache_images.go:123] Successfully loaded all cached images
	I0815 18:37:40.208240   67936 cache_images.go:92] duration metric: took 15.723882738s to LoadCachedImages
	I0815 18:37:40.208252   67936 kubeadm.go:934] updating node { 192.168.72.14 8443 v1.31.0 crio true true} ...
	I0815 18:37:40.208416   67936 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-599042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:40.208544   67936 ssh_runner.go:195] Run: crio config
	I0815 18:37:40.261526   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:40.261545   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:40.261552   67936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:40.261572   67936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.14 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-599042 NodeName:no-preload-599042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:37:40.261688   67936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-599042"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:40.261742   67936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:37:40.271844   67936 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:40.271921   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:40.280957   67936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0815 18:37:40.297378   67936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:40.313215   67936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0815 18:37:40.329640   67936 ssh_runner.go:195] Run: grep 192.168.72.14	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:40.333331   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:40.344805   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:40.457352   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:40.475219   67936 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042 for IP: 192.168.72.14
	I0815 18:37:40.475238   67936 certs.go:194] generating shared ca certs ...
	I0815 18:37:40.475252   67936 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:40.475416   67936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:40.475475   67936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:40.475489   67936 certs.go:256] generating profile certs ...
	I0815 18:37:40.475591   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.key
	I0815 18:37:40.475670   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key.15ba6898
	I0815 18:37:40.475714   67936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key
	I0815 18:37:40.475865   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:40.475904   67936 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:40.475917   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:40.475950   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:40.475978   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:40.476012   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:40.476069   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:40.476738   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:40.513554   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:40.549095   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:40.578010   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:40.612637   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:37:40.639974   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:37:40.672937   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:40.696889   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:37:40.721258   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:40.744104   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:40.766463   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:40.788628   67936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:40.805346   67936 ssh_runner.go:195] Run: openssl version
	I0815 18:37:40.811193   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:40.822610   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826918   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826969   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.832544   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:40.843338   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:40.854032   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858512   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858563   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.864247   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:40.874724   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:40.885538   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889849   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889899   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.895258   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:40.906841   67936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:40.911629   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:40.918085   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:40.924194   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:40.930009   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:40.935634   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:40.941168   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
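The `openssl x509 -noout -in ... -checkend 86400` runs above confirm each control-plane certificate stays valid for at least 24 hours before it is reused. An equivalent check with Go's crypto/x509, using one of the certificate paths from the log:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path remains valid for at
// least d, the same test performed by `openssl x509 -checkend <seconds>`.
func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// 24h mirrors the 86400-second argument used in the log.
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("valid for another 24h:", ok)
}
```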
	I0815 18:37:40.946761   67936 kubeadm.go:392] StartCluster: {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:40.946836   67936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:40.946874   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:40.990733   67936 cri.go:89] found id: ""
	I0815 18:37:40.990808   67936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:41.002969   67936 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:41.002988   67936 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:41.003041   67936 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:41.013722   67936 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:41.015079   67936 kubeconfig.go:125] found "no-preload-599042" server: "https://192.168.72.14:8443"
	I0815 18:37:41.017905   67936 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:41.029240   67936 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.14
	I0815 18:37:41.029271   67936 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:41.029284   67936 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:41.029326   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:41.064689   67936 cri.go:89] found id: ""
	I0815 18:37:41.064754   67936 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:41.085195   67936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:41.096355   67936 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:41.096375   67936 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:41.096425   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:41.106887   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:41.106941   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:41.117599   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:41.127956   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:41.128020   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:41.137384   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.146075   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:41.146122   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.156417   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:41.165287   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:41.165325   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:41.174245   67936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:41.183335   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:41.314804   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.422591   67936 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.107749325s)
	I0815 18:37:42.422628   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.642065   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.710265   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.791233   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:42.791334   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.291538   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.791682   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.831611   67936 api_server.go:72] duration metric: took 1.040390925s to wait for apiserver process to appear ...
	I0815 18:37:43.831641   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:37:43.831662   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:43.832110   67936 api_server.go:269] stopped: https://192.168.72.14:8443/healthz: Get "https://192.168.72.14:8443/healthz": dial tcp 192.168.72.14:8443: connect: connection refused
	I0815 18:37:41.154701   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:43.655756   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.661385   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:45.158918   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.901106   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.401146   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.901869   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.401483   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.901302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.401505   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.901504   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.401025   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.901713   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:47.401588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.332554   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.112640   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:37:47.112668   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:37:47.112681   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.244211   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.244246   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.332375   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.339129   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.339153   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.831731   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.836308   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.836330   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.331914   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.336314   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:48.336347   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.831862   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.836012   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:37:48.842971   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:37:48.842996   67936 api_server.go:131] duration metric: took 5.011346791s to wait for apiserver health ...
	I0815 18:37:48.843008   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:48.843015   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:48.844939   67936 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:37:48.846262   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:37:48.857335   67936 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:37:48.876370   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:37:48.886582   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:37:48.886628   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:37:48.886640   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:37:48.886653   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:37:48.886666   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:37:48.886679   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 18:37:48.886691   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:37:48.886701   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:37:48.886711   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 18:37:48.886722   67936 system_pods.go:74] duration metric: took 10.329234ms to wait for pod list to return data ...
	I0815 18:37:48.886736   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:37:48.890525   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:37:48.890560   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:37:48.890571   67936 node_conditions.go:105] duration metric: took 3.828616ms to run NodePressure ...
	I0815 18:37:48.890590   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:46.155548   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:48.655549   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:49.183845   67936 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188602   67936 kubeadm.go:739] kubelet initialised
	I0815 18:37:49.188629   67936 kubeadm.go:740] duration metric: took 4.755371ms waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188639   67936 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:49.193101   67936 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.199195   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199215   67936 pod_ready.go:82] duration metric: took 6.088761ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.199226   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199236   67936 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.205076   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205095   67936 pod_ready.go:82] duration metric: took 5.848521ms for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.205105   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205111   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.210559   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210578   67936 pod_ready.go:82] duration metric: took 5.449861ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.210587   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210594   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.281799   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281828   67936 pod_ready.go:82] duration metric: took 71.206144ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.281840   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281850   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.680097   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680121   67936 pod_ready.go:82] duration metric: took 398.261641ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.680131   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680136   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.080391   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080415   67936 pod_ready.go:82] duration metric: took 400.272871ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.080425   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080430   67936 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.482715   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482744   67936 pod_ready.go:82] duration metric: took 402.304556ms for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.482753   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482761   67936 pod_ready.go:39] duration metric: took 1.294109816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:50.482779   67936 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:37:50.495888   67936 ops.go:34] apiserver oom_adj: -16
	I0815 18:37:50.495912   67936 kubeadm.go:597] duration metric: took 9.4929178s to restartPrimaryControlPlane
	I0815 18:37:50.495924   67936 kubeadm.go:394] duration metric: took 9.549167115s to StartCluster
	I0815 18:37:50.495943   67936 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.496020   67936 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:50.497743   67936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.497976   67936 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:37:50.498166   67936 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:37:50.498225   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:50.498251   67936 addons.go:69] Setting storage-provisioner=true in profile "no-preload-599042"
	I0815 18:37:50.498266   67936 addons.go:69] Setting default-storageclass=true in profile "no-preload-599042"
	I0815 18:37:50.498287   67936 addons.go:234] Setting addon storage-provisioner=true in "no-preload-599042"
	I0815 18:37:50.498303   67936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-599042"
	W0815 18:37:50.498311   67936 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:37:50.498343   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.498708   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498733   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498745   67936 addons.go:69] Setting metrics-server=true in profile "no-preload-599042"
	I0815 18:37:50.498753   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.498783   67936 addons.go:234] Setting addon metrics-server=true in "no-preload-599042"
	W0815 18:37:50.498795   67936 addons.go:243] addon metrics-server should already be in state true
	I0815 18:37:50.498734   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.499070   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.499350   67936 out.go:177] * Verifying Kubernetes components...
	I0815 18:37:50.499436   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.499467   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.500629   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:50.514727   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0815 18:37:50.514956   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0815 18:37:50.515112   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515379   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515622   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515639   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.515844   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515866   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.516028   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.516697   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.516741   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.516854   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.517455   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.517487   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.517879   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0815 18:37:50.518180   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.518645   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.518666   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.518975   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.519155   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.522283   67936 addons.go:234] Setting addon default-storageclass=true in "no-preload-599042"
	W0815 18:37:50.522301   67936 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:37:50.522321   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.522589   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.522616   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.533306   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0815 18:37:50.533891   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.534378   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.534403   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.535077   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.535251   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.536333   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0815 18:37:50.536960   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.537421   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.537484   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.537500   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.537581   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0815 18:37:50.537832   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.537992   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.538044   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.538964   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.538983   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.539442   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.539494   67936 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:37:50.540127   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.540138   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.540166   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.540633   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:37:50.540653   67936 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:37:50.540673   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.541641   67936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:47.658449   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.159642   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.542848   67936 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.542867   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:37:50.542883   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.544059   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544644   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.544669   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544879   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.545056   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.545226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.545363   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.545609   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.545957   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.545984   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.546188   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.546350   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.546459   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.546563   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.576049   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0815 18:37:50.576398   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.576963   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.576991   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.577315   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.577536   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.579041   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.579244   67936 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.579259   67936 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:37:50.579273   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.583471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583857   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.583884   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583984   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.584140   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.584298   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.584431   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.711232   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:50.738297   67936 node_ready.go:35] waiting up to 6m0s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:50.787041   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.876459   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.926707   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:37:50.926727   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:37:50.967734   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:37:50.967764   67936 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:37:50.994557   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:50.994580   67936 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:37:51.018573   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:51.217167   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217199   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217511   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217561   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217570   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.217579   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217592   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217846   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217889   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217900   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.223755   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.223774   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.224006   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.224024   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.794888   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.794919   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795198   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.795227   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795240   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.795256   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.795267   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795503   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795521   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936158   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936178   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936438   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.936467   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936505   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936519   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936528   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936754   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936773   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936785   67936 addons.go:475] Verifying addon metrics-server=true in "no-preload-599042"
	I0815 18:37:51.938619   67936 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 18:37:47.901026   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.401023   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.901661   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.401358   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.901410   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.401040   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.901695   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.401365   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.901733   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:52.401439   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.939743   67936 addons.go:510] duration metric: took 1.441583595s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 18:37:52.742152   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:51.155350   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:53.654487   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.658151   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:54.658269   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.901361   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.401417   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.901380   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.401820   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.901113   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.401270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.900941   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.901834   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:57.401496   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.242506   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:57.742723   67936 node_ready.go:49] node "no-preload-599042" has status "Ready":"True"
	I0815 18:37:57.742746   67936 node_ready.go:38] duration metric: took 7.00442012s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:57.742764   67936 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:57.747927   67936 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752478   67936 pod_ready.go:93] pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:57.752513   67936 pod_ready.go:82] duration metric: took 4.560553ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752524   67936 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760896   67936 pod_ready.go:93] pod "etcd-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.760924   67936 pod_ready.go:82] duration metric: took 1.008390436s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760937   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774529   67936 pod_ready.go:93] pod "kube-apiserver-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.774557   67936 pod_ready.go:82] duration metric: took 13.611063ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774568   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793851   67936 pod_ready.go:93] pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.793873   67936 pod_ready.go:82] duration metric: took 19.297089ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793885   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943096   67936 pod_ready.go:93] pod "kube-proxy-bwb9h" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.943120   67936 pod_ready.go:82] duration metric: took 149.227014ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943129   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
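	(The pod_ready waits above poll each system-critical pod in kube-system until its Ready condition reports True. For reference, roughly the same check can be reproduced by hand with kubectl wait; this is only a sketch, and it assumes the kubeconfig context carries the profile name no-preload-599042, which the log does not show explicitly:
	    # wait for CoreDNS pods (label taken from the log) to become Ready
	    kubectl --context no-preload-599042 -n kube-system wait \
	      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	    # wait for a specific static pod, e.g. etcd on this node
	    kubectl --context no-preload-599042 -n kube-system wait \
	      --for=condition=Ready pod/etcd-no-preload-599042 --timeout=6m
	)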
	I0815 18:37:56.154874   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:58.655280   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.158586   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:59.159257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.901938   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.401246   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.900950   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.400984   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.401707   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.901455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.901613   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:02.401302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.342426   67936 pod_ready.go:93] pod "kube-scheduler-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:59.342447   67936 pod_ready.go:82] duration metric: took 399.312035ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:59.342460   67936 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:38:01.349419   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.848558   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.154194   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.154779   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.658502   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:04.158895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:02.901914   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.401357   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.901258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.400961   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.401852   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.901115   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.401170   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.901694   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:07.401816   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.849586   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.349057   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:05.155847   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.653607   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:09.654245   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:06.658092   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.659361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.900966   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.401136   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.901534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.400982   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.901126   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.401120   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.901175   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.401704   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.901710   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:12.401712   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.349443   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.349942   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.655212   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:14.154508   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.158562   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:13.657985   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:15.658088   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.901680   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.401532   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.901198   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:13.901295   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:13.938743   68713 cri.go:89] found id: ""
	I0815 18:38:13.938770   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.938778   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:13.938786   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:13.938843   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:13.971997   68713 cri.go:89] found id: ""
	I0815 18:38:13.972029   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.972041   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:13.972048   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:13.972111   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:14.006793   68713 cri.go:89] found id: ""
	I0815 18:38:14.006825   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.006836   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:14.006844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:14.006903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:14.041546   68713 cri.go:89] found id: ""
	I0815 18:38:14.041575   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.041587   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:14.041595   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:14.041680   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:14.077614   68713 cri.go:89] found id: ""
	I0815 18:38:14.077639   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.077648   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:14.077653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:14.077704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:14.113683   68713 cri.go:89] found id: ""
	I0815 18:38:14.113711   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.113721   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:14.113730   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:14.113790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:14.149581   68713 cri.go:89] found id: ""
	I0815 18:38:14.149608   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.149616   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:14.149622   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:14.149678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:14.191576   68713 cri.go:89] found id: ""
	I0815 18:38:14.191606   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.191614   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:14.191622   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:14.191635   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:14.243253   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:14.243287   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:14.256818   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:14.256845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:14.382914   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.382933   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:14.382948   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:14.461826   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:14.461859   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
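	(The block above is one full pass of the retry loop this start attempt (process 68713, using the v1.20.0 binaries) runs while waiting for an apiserver: look for a kube-apiserver process, ask CRI-O for containers of each control-plane component, and, finding none, fall back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. The same checks can be repeated manually on the node, for example over `minikube ssh -p <profile>` where the profile name is a placeholder; the commands are the ones already shown in the log:
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'       # is an apiserver process running at all?
	    sudo crictl ps -a --quiet --name=kube-apiserver    # does CRI-O know of an apiserver container?
	    sudo journalctl -u kubelet -n 400                  # why the kubelet is not starting the static pods
	    sudo journalctl -u crio -n 400                     # runtime-side view of the same window
	)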
	I0815 18:38:17.005615   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:17.020977   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:17.021042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:17.070191   68713 cri.go:89] found id: ""
	I0815 18:38:17.070220   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.070232   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:17.070239   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:17.070301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:17.118582   68713 cri.go:89] found id: ""
	I0815 18:38:17.118612   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.118624   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:17.118631   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:17.118693   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:17.165380   68713 cri.go:89] found id: ""
	I0815 18:38:17.165404   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.165413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:17.165421   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:17.165483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:17.204630   68713 cri.go:89] found id: ""
	I0815 18:38:17.204660   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.204670   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:17.204678   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:17.204740   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:17.239182   68713 cri.go:89] found id: ""
	I0815 18:38:17.239210   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.239219   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:17.239226   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:17.239285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:17.276329   68713 cri.go:89] found id: ""
	I0815 18:38:17.276356   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.276367   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:17.276375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:17.276472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:17.312387   68713 cri.go:89] found id: ""
	I0815 18:38:17.312418   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.312427   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:17.312433   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:17.312485   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:17.348277   68713 cri.go:89] found id: ""
	I0815 18:38:17.348300   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.348308   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:17.348315   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:17.348334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:17.424886   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:17.424924   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.465491   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:17.465518   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:17.517687   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:17.517719   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:17.531928   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:17.531959   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:17.606987   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.849001   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:17.349912   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:16.155496   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.653621   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.159850   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.658717   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.107740   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:20.123194   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:20.123255   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:20.163586   68713 cri.go:89] found id: ""
	I0815 18:38:20.163608   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.163619   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:20.163627   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:20.163676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:20.200171   68713 cri.go:89] found id: ""
	I0815 18:38:20.200196   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.200204   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:20.200210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:20.200270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:20.234739   68713 cri.go:89] found id: ""
	I0815 18:38:20.234770   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.234781   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:20.234788   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:20.234849   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:20.270182   68713 cri.go:89] found id: ""
	I0815 18:38:20.270206   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.270215   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:20.270220   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:20.270281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:20.303643   68713 cri.go:89] found id: ""
	I0815 18:38:20.303672   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.303682   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:20.303690   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:20.303748   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:20.339399   68713 cri.go:89] found id: ""
	I0815 18:38:20.339431   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.339441   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:20.339449   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:20.339511   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:20.377220   68713 cri.go:89] found id: ""
	I0815 18:38:20.377245   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.377252   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:20.377258   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:20.377310   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:20.411202   68713 cri.go:89] found id: ""
	I0815 18:38:20.411238   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.411249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:20.411268   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:20.411282   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:20.462846   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:20.462879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:20.476569   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:20.476597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:20.554243   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:20.554269   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:20.554285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:20.637450   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:20.637493   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:19.849194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:21.849502   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.655378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.154633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.160747   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.658706   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.182633   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:23.196953   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:23.197026   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:23.232011   68713 cri.go:89] found id: ""
	I0815 18:38:23.232039   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.232051   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:23.232064   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:23.232114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:23.266963   68713 cri.go:89] found id: ""
	I0815 18:38:23.266992   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.267000   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:23.267006   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:23.267069   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:23.306473   68713 cri.go:89] found id: ""
	I0815 18:38:23.306496   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.306504   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:23.306510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:23.306574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:23.343542   68713 cri.go:89] found id: ""
	I0815 18:38:23.343574   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.343585   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:23.343592   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:23.343652   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:23.382468   68713 cri.go:89] found id: ""
	I0815 18:38:23.382527   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.382539   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:23.382547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:23.382612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:23.418857   68713 cri.go:89] found id: ""
	I0815 18:38:23.418882   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.418891   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:23.418897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:23.418956   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:23.460971   68713 cri.go:89] found id: ""
	I0815 18:38:23.461004   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.461016   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:23.461023   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:23.461100   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:23.494139   68713 cri.go:89] found id: ""
	I0815 18:38:23.494172   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.494183   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:23.494194   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:23.494208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:23.547874   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:23.547908   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:23.562251   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:23.562278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:23.636503   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:23.636528   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:23.636545   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:23.716020   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:23.716051   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.255081   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:26.270118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:26.270184   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:26.308586   68713 cri.go:89] found id: ""
	I0815 18:38:26.308612   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.308623   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:26.308630   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:26.308688   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:26.344364   68713 cri.go:89] found id: ""
	I0815 18:38:26.344394   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.344410   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:26.344418   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:26.344533   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:26.381621   68713 cri.go:89] found id: ""
	I0815 18:38:26.381642   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.381650   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:26.381655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:26.381699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:26.416091   68713 cri.go:89] found id: ""
	I0815 18:38:26.416118   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.416128   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:26.416136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:26.416195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:26.456038   68713 cri.go:89] found id: ""
	I0815 18:38:26.456068   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.456080   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:26.456088   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:26.456151   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:26.490728   68713 cri.go:89] found id: ""
	I0815 18:38:26.490758   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.490769   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:26.490776   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:26.490837   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:26.529388   68713 cri.go:89] found id: ""
	I0815 18:38:26.529422   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.529434   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:26.529440   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:26.529489   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:26.567452   68713 cri.go:89] found id: ""
	I0815 18:38:26.567475   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.567484   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:26.567491   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:26.567503   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:26.641841   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:26.641863   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:26.641879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:26.719403   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:26.719438   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.760460   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:26.760507   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:26.814450   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:26.814480   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:24.349319   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:26.850207   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.155213   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.654265   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.656816   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.663849   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:30.158417   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.329451   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:29.344634   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:29.344706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:29.379278   68713 cri.go:89] found id: ""
	I0815 18:38:29.379308   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.379319   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:29.379326   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:29.379385   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:29.411854   68713 cri.go:89] found id: ""
	I0815 18:38:29.411881   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.411891   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:29.411898   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:29.411965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:29.443975   68713 cri.go:89] found id: ""
	I0815 18:38:29.444004   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.444014   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:29.444022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:29.444081   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:29.477919   68713 cri.go:89] found id: ""
	I0815 18:38:29.477944   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.477954   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:29.477962   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:29.478020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:29.518944   68713 cri.go:89] found id: ""
	I0815 18:38:29.518973   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.518985   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:29.518992   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:29.519052   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:29.553876   68713 cri.go:89] found id: ""
	I0815 18:38:29.553903   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.553913   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:29.553921   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:29.553974   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:29.590768   68713 cri.go:89] found id: ""
	I0815 18:38:29.590804   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.590815   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:29.590823   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:29.590879   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:29.625553   68713 cri.go:89] found id: ""
	I0815 18:38:29.625578   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.625586   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:29.625595   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:29.625606   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.668447   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:29.668478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:29.721002   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:29.721035   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:29.734955   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:29.734983   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:29.808703   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:29.808726   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:29.808742   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.397781   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:32.413876   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:32.413937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:32.453689   68713 cri.go:89] found id: ""
	I0815 18:38:32.453720   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.453777   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:32.453791   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:32.453839   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:32.490529   68713 cri.go:89] found id: ""
	I0815 18:38:32.490559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.490567   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:32.490573   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:32.490622   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:32.527680   68713 cri.go:89] found id: ""
	I0815 18:38:32.527710   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.527720   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:32.527727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:32.527790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:32.564619   68713 cri.go:89] found id: ""
	I0815 18:38:32.564656   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.564667   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:32.564677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:32.564745   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:32.600530   68713 cri.go:89] found id: ""
	I0815 18:38:32.600559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.600570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:32.600577   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:32.600639   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:32.636779   68713 cri.go:89] found id: ""
	I0815 18:38:32.636813   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.636821   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:32.636828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:32.636897   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:32.673743   68713 cri.go:89] found id: ""
	I0815 18:38:32.673774   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.673786   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:32.673794   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:32.673853   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:32.709678   68713 cri.go:89] found id: ""
	I0815 18:38:32.709708   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.709719   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:32.709730   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:32.709744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.785961   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:32.785998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.349763   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:31.350398   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:33.848873   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.155992   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.159855   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.657783   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.828205   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:32.828237   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:32.894624   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:32.894666   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:32.910742   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:32.910769   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:32.980853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
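	(The recurring "connection to the server localhost:8443 was refused" means kubectl on the node has no apiserver to talk to, which matches the empty crictl listings above. A couple of generic commands, given here only as a sketch and not part of the test run, would confirm from the node whether anything is bound to the apiserver port:
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    curl -sSk https://localhost:8443/healthz    # expect a 'Connection refused' error while the apiserver is down
	)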
	I0815 18:38:35.481438   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:35.495373   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:35.495444   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:35.529184   68713 cri.go:89] found id: ""
	I0815 18:38:35.529212   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.529221   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:35.529226   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:35.529275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:35.565188   68713 cri.go:89] found id: ""
	I0815 18:38:35.565214   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.565221   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:35.565227   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:35.565281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:35.600386   68713 cri.go:89] found id: ""
	I0815 18:38:35.600416   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.600428   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:35.600435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:35.600519   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:35.634255   68713 cri.go:89] found id: ""
	I0815 18:38:35.634278   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.634287   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:35.634293   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:35.634339   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:35.670236   68713 cri.go:89] found id: ""
	I0815 18:38:35.670260   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.670268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:35.670273   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:35.670354   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:35.707691   68713 cri.go:89] found id: ""
	I0815 18:38:35.707714   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.707722   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:35.707727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:35.707782   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:35.745791   68713 cri.go:89] found id: ""
	I0815 18:38:35.745820   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.745832   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:35.745844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:35.745916   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:35.784167   68713 cri.go:89] found id: ""
	I0815 18:38:35.784195   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.784205   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:35.784217   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:35.784234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:35.864681   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:35.864711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:35.906831   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:35.906858   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:35.960328   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:35.960366   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:35.974401   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:35.974428   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:36.044789   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.849744   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.348058   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.654916   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.155585   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.658767   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.159236   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.545951   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:38.561473   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:38.561540   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:38.597621   68713 cri.go:89] found id: ""
	I0815 18:38:38.597658   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.597668   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:38.597679   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:38.597756   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:38.632081   68713 cri.go:89] found id: ""
	I0815 18:38:38.632115   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.632127   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:38.632135   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:38.632203   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:38.669917   68713 cri.go:89] found id: ""
	I0815 18:38:38.669944   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.669952   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:38.669958   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:38.670015   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:38.707552   68713 cri.go:89] found id: ""
	I0815 18:38:38.707574   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.707582   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:38.707588   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:38.707642   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:38.746054   68713 cri.go:89] found id: ""
	I0815 18:38:38.746082   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.746093   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:38.746101   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:38.746166   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:38.783901   68713 cri.go:89] found id: ""
	I0815 18:38:38.783933   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.783945   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:38.783952   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:38.784018   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:38.825411   68713 cri.go:89] found id: ""
	I0815 18:38:38.825441   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.825452   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:38.825459   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:38.825520   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:38.863174   68713 cri.go:89] found id: ""
	I0815 18:38:38.863219   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.863231   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:38.863241   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:38.863254   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:38.914016   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:38.914045   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:38.927634   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:38.927659   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:38.993380   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:38.993407   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:38.993422   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:39.077075   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:39.077116   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:41.620219   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:41.633572   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:41.633628   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:41.670330   68713 cri.go:89] found id: ""
	I0815 18:38:41.670351   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.670358   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:41.670364   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:41.670418   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:41.706467   68713 cri.go:89] found id: ""
	I0815 18:38:41.706494   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.706502   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:41.706508   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:41.706564   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:41.742915   68713 cri.go:89] found id: ""
	I0815 18:38:41.742958   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.742970   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:41.742978   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:41.743044   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:41.778650   68713 cri.go:89] found id: ""
	I0815 18:38:41.778679   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.778687   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:41.778692   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:41.778739   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:41.813329   68713 cri.go:89] found id: ""
	I0815 18:38:41.813358   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.813369   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:41.813375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:41.813427   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:41.851351   68713 cri.go:89] found id: ""
	I0815 18:38:41.851383   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.851391   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:41.851398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:41.851460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:41.895097   68713 cri.go:89] found id: ""
	I0815 18:38:41.895130   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.895142   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:41.895150   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:41.895209   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:41.931306   68713 cri.go:89] found id: ""
	I0815 18:38:41.931336   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.931353   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:41.931365   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:41.931381   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:41.944796   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:41.944828   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:42.018868   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:42.018891   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:42.018903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:42.104304   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:42.104340   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:42.143625   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:42.143655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:40.349197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:42.850034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.655478   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.155025   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.159976   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:43.658013   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:45.658358   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.698568   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:44.712171   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:44.712247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.747043   68713 cri.go:89] found id: ""
	I0815 18:38:44.747071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.747079   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:44.747085   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:44.747143   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:44.782660   68713 cri.go:89] found id: ""
	I0815 18:38:44.782691   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.782703   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:44.782711   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:44.782765   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:44.821111   68713 cri.go:89] found id: ""
	I0815 18:38:44.821138   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.821146   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:44.821152   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:44.821222   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:44.859602   68713 cri.go:89] found id: ""
	I0815 18:38:44.859635   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.859647   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:44.859655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:44.859717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:44.895037   68713 cri.go:89] found id: ""
	I0815 18:38:44.895071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.895083   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:44.895090   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:44.895175   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:44.928729   68713 cri.go:89] found id: ""
	I0815 18:38:44.928759   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.928771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:44.928781   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:44.928844   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:44.963945   68713 cri.go:89] found id: ""
	I0815 18:38:44.963977   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.963987   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:44.963996   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:44.964060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:45.001166   68713 cri.go:89] found id: ""
	I0815 18:38:45.001195   68713 logs.go:276] 0 containers: []
	W0815 18:38:45.001206   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:45.001218   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:45.001234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:45.015181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:45.015209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:45.084297   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:45.084322   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:45.084334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:45.173833   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:45.173866   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:45.211863   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:45.211899   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:47.771009   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:47.784865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:47.784926   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.850332   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.347985   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:46.654797   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:48.654936   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.658823   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:50.178115   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.818497   68713 cri.go:89] found id: ""
	I0815 18:38:47.818526   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.818538   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:47.818545   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:47.818608   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:47.857900   68713 cri.go:89] found id: ""
	I0815 18:38:47.857927   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.857935   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:47.857941   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:47.857997   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:47.895778   68713 cri.go:89] found id: ""
	I0815 18:38:47.895809   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.895822   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:47.895829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:47.895887   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:47.937410   68713 cri.go:89] found id: ""
	I0815 18:38:47.937434   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.937442   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:47.937448   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:47.937505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:47.976414   68713 cri.go:89] found id: ""
	I0815 18:38:47.976442   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.976450   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:47.976455   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:47.976525   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:48.014863   68713 cri.go:89] found id: ""
	I0815 18:38:48.014891   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.014899   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:48.014906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:48.014969   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:48.053508   68713 cri.go:89] found id: ""
	I0815 18:38:48.053536   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.053546   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:48.053554   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:48.053624   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:48.088900   68713 cri.go:89] found id: ""
	I0815 18:38:48.088931   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.088943   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:48.088954   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:48.088969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:48.140415   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:48.140447   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:48.155958   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:48.155985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:48.229338   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:48.229368   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:48.229383   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:48.317776   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:48.317814   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:50.860592   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:50.877070   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:50.877154   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:50.937590   68713 cri.go:89] found id: ""
	I0815 18:38:50.937615   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.937622   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:50.937628   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:50.937687   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:50.972573   68713 cri.go:89] found id: ""
	I0815 18:38:50.972603   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.972614   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:50.972622   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:50.972683   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:51.008786   68713 cri.go:89] found id: ""
	I0815 18:38:51.008811   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.008820   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:51.008826   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:51.008875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:51.043076   68713 cri.go:89] found id: ""
	I0815 18:38:51.043105   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.043116   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:51.043123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:51.043186   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:51.078344   68713 cri.go:89] found id: ""
	I0815 18:38:51.078379   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.078391   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:51.078398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:51.078453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:51.114494   68713 cri.go:89] found id: ""
	I0815 18:38:51.114521   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.114532   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:51.114540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:51.114600   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:51.153871   68713 cri.go:89] found id: ""
	I0815 18:38:51.153898   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.153909   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:51.153917   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:51.153984   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:51.187908   68713 cri.go:89] found id: ""
	I0815 18:38:51.187937   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.187948   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:51.187959   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:51.187974   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:51.264172   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:51.264198   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:51.264214   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:51.345238   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:51.345285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:51.385720   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:51.385745   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:51.443313   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:51.443353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:49.849156   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.348628   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:51.154188   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.155256   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.658773   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:54.659127   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.959176   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:53.972031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:53.972101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:54.010673   68713 cri.go:89] found id: ""
	I0815 18:38:54.010699   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.010707   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:54.010714   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:54.010775   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:54.045632   68713 cri.go:89] found id: ""
	I0815 18:38:54.045662   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.045672   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:54.045678   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:54.045727   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:54.082111   68713 cri.go:89] found id: ""
	I0815 18:38:54.082134   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.082142   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:54.082148   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:54.082206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:54.118210   68713 cri.go:89] found id: ""
	I0815 18:38:54.118232   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.118239   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:54.118246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:54.118305   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:54.155474   68713 cri.go:89] found id: ""
	I0815 18:38:54.155499   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.155508   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:54.155515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:54.155572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:54.193263   68713 cri.go:89] found id: ""
	I0815 18:38:54.193298   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.193305   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:54.193311   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:54.193365   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:54.233389   68713 cri.go:89] found id: ""
	I0815 18:38:54.233416   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.233428   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:54.233435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:54.233502   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:54.266127   68713 cri.go:89] found id: ""
	I0815 18:38:54.266155   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.266164   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:54.266176   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:54.266199   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:54.318724   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:54.318762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:54.332993   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:54.333022   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:54.405895   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:54.405915   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:54.405926   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.485819   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:54.485875   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.024956   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:57.038182   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:57.038246   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:57.078020   68713 cri.go:89] found id: ""
	I0815 18:38:57.078044   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.078055   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:57.078063   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:57.078127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:57.115077   68713 cri.go:89] found id: ""
	I0815 18:38:57.115101   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.115110   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:57.115118   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:57.115179   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:57.152711   68713 cri.go:89] found id: ""
	I0815 18:38:57.152737   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.152747   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:57.152755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:57.152819   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:57.190000   68713 cri.go:89] found id: ""
	I0815 18:38:57.190034   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.190042   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:57.190048   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:57.190096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:57.224947   68713 cri.go:89] found id: ""
	I0815 18:38:57.224978   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.224990   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:57.224998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:57.225060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:57.262329   68713 cri.go:89] found id: ""
	I0815 18:38:57.262365   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.262375   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:57.262383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:57.262458   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:57.299471   68713 cri.go:89] found id: ""
	I0815 18:38:57.299498   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.299507   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:57.299513   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:57.299572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:57.357163   68713 cri.go:89] found id: ""
	I0815 18:38:57.357202   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.357211   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:57.357220   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:57.357236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.405154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:57.405184   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:57.459245   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:57.459277   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:57.473663   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:57.473699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:57.546670   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:57.546699   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:57.546715   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.348864   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.848276   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:55.655015   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.158306   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.662722   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:59.159559   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.124455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:00.137985   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:00.138053   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:00.175201   68713 cri.go:89] found id: ""
	I0815 18:39:00.175231   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.175242   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:00.175250   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:00.175328   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:00.209376   68713 cri.go:89] found id: ""
	I0815 18:39:00.209406   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.209418   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:00.209426   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:00.209484   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:00.246860   68713 cri.go:89] found id: ""
	I0815 18:39:00.246889   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.246899   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:00.246906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:00.246965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:00.282787   68713 cri.go:89] found id: ""
	I0815 18:39:00.282814   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.282823   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:00.282829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:00.282875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:00.330227   68713 cri.go:89] found id: ""
	I0815 18:39:00.330256   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.330268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:00.330275   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:00.330338   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:00.363028   68713 cri.go:89] found id: ""
	I0815 18:39:00.363061   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.363072   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:00.363079   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:00.363169   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:00.400484   68713 cri.go:89] found id: ""
	I0815 18:39:00.400522   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.400533   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:00.400540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:00.400597   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:00.436187   68713 cri.go:89] found id: ""
	I0815 18:39:00.436225   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.436238   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:00.436252   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:00.436267   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:00.481960   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:00.481985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:00.535103   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:00.535138   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:00.548541   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:00.548568   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:00.619476   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:00.619505   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:00.619525   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:01.347916   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.349448   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.654384   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.155048   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:01.658374   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.658824   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.206473   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:03.222893   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:03.222967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:03.272249   68713 cri.go:89] found id: ""
	I0815 18:39:03.272275   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.272283   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:03.272291   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:03.272355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:03.336786   68713 cri.go:89] found id: ""
	I0815 18:39:03.336811   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.336819   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:03.336825   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:03.336884   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:03.375977   68713 cri.go:89] found id: ""
	I0815 18:39:03.376002   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.376011   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:03.376016   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:03.376063   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:03.410304   68713 cri.go:89] found id: ""
	I0815 18:39:03.410326   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.410335   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:03.410340   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:03.410403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:03.446147   68713 cri.go:89] found id: ""
	I0815 18:39:03.446176   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.446188   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:03.446195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:03.446256   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:03.482555   68713 cri.go:89] found id: ""
	I0815 18:39:03.482582   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.482591   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:03.482597   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:03.482654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:03.519477   68713 cri.go:89] found id: ""
	I0815 18:39:03.519503   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.519511   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:03.519517   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:03.519574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:03.556539   68713 cri.go:89] found id: ""
	I0815 18:39:03.556566   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.556577   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:03.556587   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:03.556602   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:03.610553   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:03.610593   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:03.625799   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:03.625827   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:03.697106   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:03.697132   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:03.697149   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:03.779089   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:03.779120   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:06.319280   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:06.333284   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:06.333355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:06.369696   68713 cri.go:89] found id: ""
	I0815 18:39:06.369719   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.369727   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:06.369732   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:06.369780   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:06.405023   68713 cri.go:89] found id: ""
	I0815 18:39:06.405046   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.405053   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:06.405059   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:06.405113   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:06.439948   68713 cri.go:89] found id: ""
	I0815 18:39:06.439974   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.439983   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:06.439989   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:06.440048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:06.475613   68713 cri.go:89] found id: ""
	I0815 18:39:06.475642   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.475654   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:06.475664   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:06.475723   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:06.510698   68713 cri.go:89] found id: ""
	I0815 18:39:06.510721   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.510729   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:06.510735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:06.510783   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:06.545831   68713 cri.go:89] found id: ""
	I0815 18:39:06.545861   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.545873   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:06.545880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:06.545940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:06.579027   68713 cri.go:89] found id: ""
	I0815 18:39:06.579053   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.579064   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:06.579072   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:06.579132   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:06.615308   68713 cri.go:89] found id: ""
	I0815 18:39:06.615339   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.615352   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:06.615371   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:06.615396   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:06.671523   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:06.671555   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:06.685556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:06.685580   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:06.765036   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:06.765059   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:06.765071   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:06.843412   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:06.843457   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:05.849018   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.849342   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:05.654854   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.654910   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.655240   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:06.158409   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:08.657902   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:10.658258   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.390799   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:09.404099   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:09.404160   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:09.439534   68713 cri.go:89] found id: ""
	I0815 18:39:09.439563   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.439582   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:09.439591   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:09.439654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:09.478933   68713 cri.go:89] found id: ""
	I0815 18:39:09.478963   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.478974   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:09.478982   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:09.479042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:09.514396   68713 cri.go:89] found id: ""
	I0815 18:39:09.514425   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.514436   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:09.514444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:09.514510   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:09.547749   68713 cri.go:89] found id: ""
	I0815 18:39:09.547775   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.547785   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:09.547793   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:09.547856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:09.583583   68713 cri.go:89] found id: ""
	I0815 18:39:09.583611   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.583623   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:09.583631   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:09.583695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:09.616530   68713 cri.go:89] found id: ""
	I0815 18:39:09.616560   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.616570   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:09.616576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:09.616641   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:09.655167   68713 cri.go:89] found id: ""
	I0815 18:39:09.655189   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.655199   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:09.655207   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:09.655263   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:09.691368   68713 cri.go:89] found id: ""
	I0815 18:39:09.691391   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.691398   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:09.691411   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:09.691426   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:09.740739   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:09.740770   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:09.755049   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:09.755074   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:09.825053   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:09.825080   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:09.825095   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:09.903036   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:09.903076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:12.441898   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:12.454637   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:12.454712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:12.494604   68713 cri.go:89] found id: ""
	I0815 18:39:12.494632   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.494640   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:12.494646   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:12.494699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:12.531587   68713 cri.go:89] found id: ""
	I0815 18:39:12.531631   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.531642   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:12.531649   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:12.531710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:12.564991   68713 cri.go:89] found id: ""
	I0815 18:39:12.565014   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.565021   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:12.565027   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:12.565096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:12.600667   68713 cri.go:89] found id: ""
	I0815 18:39:12.600698   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.600709   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:12.600715   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:12.600777   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:12.633658   68713 cri.go:89] found id: ""
	I0815 18:39:12.633681   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.633691   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:12.633698   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:12.633759   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:12.673709   68713 cri.go:89] found id: ""
	I0815 18:39:12.673730   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.673737   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:12.673743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:12.673790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:12.707353   68713 cri.go:89] found id: ""
	I0815 18:39:12.707378   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.707385   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:12.707390   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:12.707437   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:12.746926   68713 cri.go:89] found id: ""
	I0815 18:39:12.746949   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.746957   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:12.746965   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:12.746977   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:09.853116   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.348297   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:11.655347   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:14.154929   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:13.158257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:15.158457   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.792154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:12.792180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:12.843933   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:12.843968   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:12.859583   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:12.859609   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:12.940856   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:12.940880   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:12.940895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:15.520265   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:15.533677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:15.533754   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:15.572109   68713 cri.go:89] found id: ""
	I0815 18:39:15.572135   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.572145   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:15.572153   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:15.572221   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:15.607442   68713 cri.go:89] found id: ""
	I0815 18:39:15.607472   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.607484   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:15.607492   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:15.607551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:15.642206   68713 cri.go:89] found id: ""
	I0815 18:39:15.642230   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.642238   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:15.642246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:15.642308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:15.677914   68713 cri.go:89] found id: ""
	I0815 18:39:15.677945   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.677956   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:15.677963   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:15.678049   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:15.714466   68713 cri.go:89] found id: ""
	I0815 18:39:15.714496   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.714504   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:15.714510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:15.714563   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:15.750961   68713 cri.go:89] found id: ""
	I0815 18:39:15.750987   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.750995   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:15.751002   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:15.751050   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:15.785399   68713 cri.go:89] found id: ""
	I0815 18:39:15.785434   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.785444   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:15.785450   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:15.785498   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:15.821547   68713 cri.go:89] found id: ""
	I0815 18:39:15.821571   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.821578   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:15.821586   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:15.821597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:15.875299   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:15.875329   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:15.890376   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:15.890408   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:15.957317   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:15.957337   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:15.957352   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:16.033952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:16.033997   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:14.349171   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.849292   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.850822   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.654572   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.656041   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:17.657984   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:19.658366   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.571953   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:18.584652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:18.584721   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:18.617043   68713 cri.go:89] found id: ""
	I0815 18:39:18.617066   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.617073   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:18.617079   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:18.617127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:18.651080   68713 cri.go:89] found id: ""
	I0815 18:39:18.651112   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.651123   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:18.651130   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:18.651187   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:18.686857   68713 cri.go:89] found id: ""
	I0815 18:39:18.686890   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.686901   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:18.686909   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:18.686975   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:18.719397   68713 cri.go:89] found id: ""
	I0815 18:39:18.719434   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.719444   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:18.719452   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:18.719521   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:18.758316   68713 cri.go:89] found id: ""
	I0815 18:39:18.758349   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.758360   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:18.758366   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:18.758435   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:18.791586   68713 cri.go:89] found id: ""
	I0815 18:39:18.791609   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.791617   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:18.791623   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:18.791690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:18.827905   68713 cri.go:89] found id: ""
	I0815 18:39:18.827929   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.827937   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:18.827945   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:18.828004   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:18.869371   68713 cri.go:89] found id: ""
	I0815 18:39:18.869404   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.869412   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:18.869420   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:18.869432   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:18.920124   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:18.920158   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:18.936137   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:18.936168   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:19.006877   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:19.006902   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:19.006913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:19.088909   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:19.088953   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.632734   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:21.647246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:21.647322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:21.685574   68713 cri.go:89] found id: ""
	I0815 18:39:21.685606   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.685614   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:21.685620   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:21.685676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:21.717073   68713 cri.go:89] found id: ""
	I0815 18:39:21.717112   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.717124   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:21.717133   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:21.717205   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:21.752072   68713 cri.go:89] found id: ""
	I0815 18:39:21.752101   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.752112   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:21.752120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:21.752182   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:21.786811   68713 cri.go:89] found id: ""
	I0815 18:39:21.786834   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.786842   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:21.786848   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:21.786893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:21.823694   68713 cri.go:89] found id: ""
	I0815 18:39:21.823719   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.823728   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:21.823733   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:21.823790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:21.859358   68713 cri.go:89] found id: ""
	I0815 18:39:21.859387   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.859398   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:21.859405   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:21.859469   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:21.893723   68713 cri.go:89] found id: ""
	I0815 18:39:21.893751   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.893761   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:21.893769   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:21.893826   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:21.929338   68713 cri.go:89] found id: ""
	I0815 18:39:21.929368   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.929379   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:21.929388   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:21.929414   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:21.979107   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:21.979141   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:21.993968   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:21.994005   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:22.063359   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:22.063384   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:22.063398   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:22.144303   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:22.144337   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.348202   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.349199   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.154244   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.155954   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.658572   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.658782   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.658946   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:24.688104   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:24.701230   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:24.701298   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:24.735056   68713 cri.go:89] found id: ""
	I0815 18:39:24.735086   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.735097   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:24.735104   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:24.735172   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:24.769704   68713 cri.go:89] found id: ""
	I0815 18:39:24.769732   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.769743   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:24.769751   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:24.769812   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:24.808763   68713 cri.go:89] found id: ""
	I0815 18:39:24.808788   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.808796   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:24.808807   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:24.808856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:24.846997   68713 cri.go:89] found id: ""
	I0815 18:39:24.847028   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.847038   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:24.847045   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:24.847106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:24.881681   68713 cri.go:89] found id: ""
	I0815 18:39:24.881705   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.881713   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:24.881719   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:24.881772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:24.917000   68713 cri.go:89] found id: ""
	I0815 18:39:24.917024   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.917032   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:24.917040   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:24.917088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:24.951133   68713 cri.go:89] found id: ""
	I0815 18:39:24.951156   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.951164   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:24.951170   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:24.951218   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:24.987306   68713 cri.go:89] found id: ""
	I0815 18:39:24.987331   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.987339   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:24.987347   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:24.987360   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:25.039533   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:25.039566   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:25.053011   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:25.053036   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:25.125864   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:25.125884   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:25.125895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:25.209885   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:25.209916   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:27.751681   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:27.765316   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:27.765390   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:25.848840   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.849344   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.156068   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.654722   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:28.158317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.158632   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.805820   68713 cri.go:89] found id: ""
	I0815 18:39:27.805858   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.805870   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:27.805878   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:27.805940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:27.846684   68713 cri.go:89] found id: ""
	I0815 18:39:27.846717   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.846727   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:27.846737   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:27.846801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:27.882326   68713 cri.go:89] found id: ""
	I0815 18:39:27.882358   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.882370   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:27.882378   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:27.882448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:27.917340   68713 cri.go:89] found id: ""
	I0815 18:39:27.917416   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.917431   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:27.917442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:27.917505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:27.952674   68713 cri.go:89] found id: ""
	I0815 18:39:27.952700   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.952708   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:27.952714   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:27.952763   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:27.986103   68713 cri.go:89] found id: ""
	I0815 18:39:27.986132   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.986143   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:27.986151   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:27.986212   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:28.023674   68713 cri.go:89] found id: ""
	I0815 18:39:28.023716   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.023735   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:28.023742   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:28.023807   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:28.064902   68713 cri.go:89] found id: ""
	I0815 18:39:28.064929   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.064937   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:28.064945   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:28.064957   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:28.116145   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:28.116180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:28.130435   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:28.130462   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:28.204899   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:28.204920   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:28.204931   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:28.284165   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:28.284202   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:30.824135   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:30.837515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:30.837583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:30.874671   68713 cri.go:89] found id: ""
	I0815 18:39:30.874695   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.874705   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:30.874712   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:30.874776   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:30.909990   68713 cri.go:89] found id: ""
	I0815 18:39:30.910027   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.910038   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:30.910045   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:30.910106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:30.946824   68713 cri.go:89] found id: ""
	I0815 18:39:30.946851   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.946859   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:30.946865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:30.946912   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:30.983392   68713 cri.go:89] found id: ""
	I0815 18:39:30.983419   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.983429   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:30.983437   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:30.983505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:31.023471   68713 cri.go:89] found id: ""
	I0815 18:39:31.023500   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.023510   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:31.023518   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:31.023583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:31.063586   68713 cri.go:89] found id: ""
	I0815 18:39:31.063616   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.063627   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:31.063636   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:31.063696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:31.103147   68713 cri.go:89] found id: ""
	I0815 18:39:31.103173   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.103180   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:31.103186   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:31.103237   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:31.144082   68713 cri.go:89] found id: ""
	I0815 18:39:31.144113   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.144124   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:31.144136   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:31.144150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:31.212535   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:31.212563   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:31.212586   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:31.292039   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:31.292076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:31.335023   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:31.335050   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:31.388817   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:31.388853   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:30.349110   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.349209   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.154683   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.653806   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.654716   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.658245   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.659119   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:33.925861   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:33.939604   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:33.939668   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:33.974538   68713 cri.go:89] found id: ""
	I0815 18:39:33.974563   68713 logs.go:276] 0 containers: []
	W0815 18:39:33.974571   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:33.974577   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:33.974634   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:34.009017   68713 cri.go:89] found id: ""
	I0815 18:39:34.009048   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.009058   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:34.009064   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:34.009120   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:34.049478   68713 cri.go:89] found id: ""
	I0815 18:39:34.049501   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.049517   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:34.049523   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:34.049576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:34.091011   68713 cri.go:89] found id: ""
	I0815 18:39:34.091040   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.091050   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:34.091056   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:34.091114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:34.126617   68713 cri.go:89] found id: ""
	I0815 18:39:34.126640   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.126650   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:34.126657   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:34.126706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:34.168140   68713 cri.go:89] found id: ""
	I0815 18:39:34.168169   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.168179   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:34.168187   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:34.168279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:34.205052   68713 cri.go:89] found id: ""
	I0815 18:39:34.205081   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.205091   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:34.205098   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:34.205173   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:34.238474   68713 cri.go:89] found id: ""
	I0815 18:39:34.238499   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.238506   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:34.238521   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:34.238540   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.280574   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:34.280601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:34.332662   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:34.332704   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:34.348556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:34.348591   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:34.421428   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:34.421450   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:34.421464   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.004855   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:37.019306   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:37.019378   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:37.057588   68713 cri.go:89] found id: ""
	I0815 18:39:37.057618   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.057626   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:37.057641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:37.057706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:37.095645   68713 cri.go:89] found id: ""
	I0815 18:39:37.095678   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.095687   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:37.095693   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:37.095750   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:37.131669   68713 cri.go:89] found id: ""
	I0815 18:39:37.131696   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.131711   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:37.131717   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:37.131772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:37.185065   68713 cri.go:89] found id: ""
	I0815 18:39:37.185097   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.185108   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:37.185115   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:37.185180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:37.220220   68713 cri.go:89] found id: ""
	I0815 18:39:37.220251   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.220262   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:37.220269   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:37.220322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:37.259816   68713 cri.go:89] found id: ""
	I0815 18:39:37.259849   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.259859   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:37.259868   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:37.259920   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:37.292777   68713 cri.go:89] found id: ""
	I0815 18:39:37.292807   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.292818   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:37.292825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:37.292888   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:37.328673   68713 cri.go:89] found id: ""
	I0815 18:39:37.328703   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.328714   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:37.328725   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:37.328740   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:37.379131   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:37.379172   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:37.392982   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:37.393017   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:37.470727   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:37.470750   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:37.470766   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.552353   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:37.552386   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.349765   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:36.655101   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.154433   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.158746   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.658907   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:40.094008   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:40.107681   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:40.107753   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:40.142229   68713 cri.go:89] found id: ""
	I0815 18:39:40.142254   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.142264   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:40.142271   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:40.142333   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:40.180622   68713 cri.go:89] found id: ""
	I0815 18:39:40.180650   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.180665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:40.180672   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:40.180733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:40.219085   68713 cri.go:89] found id: ""
	I0815 18:39:40.219113   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.219120   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:40.219126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:40.219174   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:40.254807   68713 cri.go:89] found id: ""
	I0815 18:39:40.254838   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.254850   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:40.254858   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:40.254940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:40.290438   68713 cri.go:89] found id: ""
	I0815 18:39:40.290466   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.290478   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:40.290484   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:40.290547   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:40.326320   68713 cri.go:89] found id: ""
	I0815 18:39:40.326356   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.326364   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:40.326370   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:40.326429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:40.361538   68713 cri.go:89] found id: ""
	I0815 18:39:40.361563   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.361570   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:40.361576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:40.361629   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:40.397275   68713 cri.go:89] found id: ""
	I0815 18:39:40.397304   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.397316   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:40.397326   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:40.397342   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:40.466042   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:40.466064   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:40.466078   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:40.544915   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:40.544951   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:40.584992   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:40.585019   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:40.634792   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:40.634837   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:39.848609   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.849831   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.655153   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.655372   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:42.159650   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:44.658547   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.149819   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:43.164578   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:43.164649   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:43.199576   68713 cri.go:89] found id: ""
	I0815 18:39:43.199621   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.199633   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:43.199641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:43.199702   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:43.233783   68713 cri.go:89] found id: ""
	I0815 18:39:43.233820   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.233833   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:43.233840   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:43.233911   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:43.269406   68713 cri.go:89] found id: ""
	I0815 18:39:43.269437   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.269449   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:43.269457   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:43.269570   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:43.310686   68713 cri.go:89] found id: ""
	I0815 18:39:43.310715   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.310726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:43.310734   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:43.310795   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:43.348662   68713 cri.go:89] found id: ""
	I0815 18:39:43.348689   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.348699   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:43.348706   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:43.348767   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:43.385676   68713 cri.go:89] found id: ""
	I0815 18:39:43.385714   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.385726   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:43.385737   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:43.385802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:43.422605   68713 cri.go:89] found id: ""
	I0815 18:39:43.422634   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.422645   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:43.422653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:43.422712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:43.463208   68713 cri.go:89] found id: ""
	I0815 18:39:43.463238   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.463249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:43.463260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:43.463278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:43.476637   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:43.476664   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:43.552239   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:43.552263   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:43.552278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:43.653055   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:43.653108   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:43.699166   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:43.699192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.251725   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:46.265164   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:46.265240   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:46.305095   68713 cri.go:89] found id: ""
	I0815 18:39:46.305123   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.305133   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:46.305140   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:46.305196   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:46.349744   68713 cri.go:89] found id: ""
	I0815 18:39:46.349773   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.349783   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:46.349790   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:46.349858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:46.385807   68713 cri.go:89] found id: ""
	I0815 18:39:46.385839   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.385847   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:46.385853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:46.385908   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:46.419977   68713 cri.go:89] found id: ""
	I0815 18:39:46.420011   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.420024   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:46.420031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:46.420093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:46.454852   68713 cri.go:89] found id: ""
	I0815 18:39:46.454883   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.454894   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:46.454901   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:46.454962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:46.497157   68713 cri.go:89] found id: ""
	I0815 18:39:46.497192   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.497202   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:46.497210   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:46.497271   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:46.530191   68713 cri.go:89] found id: ""
	I0815 18:39:46.530218   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.530226   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:46.530232   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:46.530282   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:46.566024   68713 cri.go:89] found id: ""
	I0815 18:39:46.566050   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.566063   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:46.566074   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:46.566103   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.621969   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:46.622004   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:46.636576   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:46.636603   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:46.706819   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:46.706842   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:46.706857   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:46.786589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:46.786634   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:44.352685   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.849090   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.849424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:45.655900   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.154862   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.658710   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.157317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.324853   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:49.343543   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:49.343618   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:49.396260   68713 cri.go:89] found id: ""
	I0815 18:39:49.396292   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.396303   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:49.396311   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:49.396380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:49.437579   68713 cri.go:89] found id: ""
	I0815 18:39:49.437604   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.437612   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:49.437617   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:49.437663   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:49.476206   68713 cri.go:89] found id: ""
	I0815 18:39:49.476232   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.476239   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:49.476245   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:49.476296   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:49.511324   68713 cri.go:89] found id: ""
	I0815 18:39:49.511349   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.511357   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:49.511363   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:49.511428   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:49.545875   68713 cri.go:89] found id: ""
	I0815 18:39:49.545907   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.545916   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:49.545922   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:49.545981   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:49.582176   68713 cri.go:89] found id: ""
	I0815 18:39:49.582204   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.582228   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:49.582246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:49.582309   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:49.623288   68713 cri.go:89] found id: ""
	I0815 18:39:49.623318   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.623327   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:49.623333   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:49.623394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:49.662352   68713 cri.go:89] found id: ""
	I0815 18:39:49.662377   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.662389   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:49.662399   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:49.662424   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:49.745582   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:49.745617   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:49.785256   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:49.785295   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:49.835944   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:49.835979   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:49.852859   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:49.852886   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:49.928427   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.429223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:52.442384   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:52.442460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:52.480515   68713 cri.go:89] found id: ""
	I0815 18:39:52.480543   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.480553   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:52.480558   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:52.480605   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:52.518346   68713 cri.go:89] found id: ""
	I0815 18:39:52.518382   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.518393   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:52.518401   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:52.518460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:52.557696   68713 cri.go:89] found id: ""
	I0815 18:39:52.557722   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.557731   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:52.557736   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:52.557786   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:52.590849   68713 cri.go:89] found id: ""
	I0815 18:39:52.590879   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.590890   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:52.590898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:52.590961   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:52.629950   68713 cri.go:89] found id: ""
	I0815 18:39:52.629980   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.629992   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:52.629999   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:52.630047   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:52.666039   68713 cri.go:89] found id: ""
	I0815 18:39:52.666070   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.666081   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:52.666089   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:52.666146   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:52.699917   68713 cri.go:89] found id: ""
	I0815 18:39:52.699941   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.699949   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:52.699955   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:52.700001   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:52.735944   68713 cri.go:89] found id: ""
	I0815 18:39:52.735973   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.735981   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:52.735989   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:52.736001   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:39:50.849633   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.850298   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:50.155118   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.155166   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:54.653844   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:51.159401   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:53.658513   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:39:52.805519   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.805537   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:52.805559   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:52.894175   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:52.894213   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:52.932974   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:52.933006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:52.984206   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:52.984244   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.498477   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:55.511319   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:55.511380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:55.544899   68713 cri.go:89] found id: ""
	I0815 18:39:55.544928   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.544936   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:55.544943   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:55.545003   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:55.578821   68713 cri.go:89] found id: ""
	I0815 18:39:55.578855   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.578864   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:55.578869   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:55.578922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:55.615392   68713 cri.go:89] found id: ""
	I0815 18:39:55.615422   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.615434   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:55.615441   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:55.615501   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:55.653456   68713 cri.go:89] found id: ""
	I0815 18:39:55.653482   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.653493   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:55.653500   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:55.653558   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:55.687716   68713 cri.go:89] found id: ""
	I0815 18:39:55.687741   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.687749   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:55.687755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:55.687802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:55.725518   68713 cri.go:89] found id: ""
	I0815 18:39:55.725543   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.725553   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:55.725561   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:55.725631   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:55.758451   68713 cri.go:89] found id: ""
	I0815 18:39:55.758479   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.758490   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:55.758498   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:55.758560   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:55.792653   68713 cri.go:89] found id: ""
	I0815 18:39:55.792680   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.792687   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:55.792699   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:55.792710   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:55.832127   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:55.832156   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:55.885255   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:55.885289   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.898980   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:55.899009   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:55.967579   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:55.967609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:55.967624   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:55.348998   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:57.349656   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.654840   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.655471   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.158348   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.658194   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.658852   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.543524   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:58.556338   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:58.556412   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:58.593359   68713 cri.go:89] found id: ""
	I0815 18:39:58.593390   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.593401   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:58.593409   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:58.593472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:58.628446   68713 cri.go:89] found id: ""
	I0815 18:39:58.628471   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.628481   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:58.628504   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:58.628567   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:58.663930   68713 cri.go:89] found id: ""
	I0815 18:39:58.663954   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.663964   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:58.663971   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:58.664028   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:58.701070   68713 cri.go:89] found id: ""
	I0815 18:39:58.701095   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.701103   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:58.701108   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:58.701156   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:58.734427   68713 cri.go:89] found id: ""
	I0815 18:39:58.734457   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.734468   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:58.734476   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:58.734543   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:58.769121   68713 cri.go:89] found id: ""
	I0815 18:39:58.769144   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.769152   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:58.769162   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:58.769215   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:58.805771   68713 cri.go:89] found id: ""
	I0815 18:39:58.805796   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.805803   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:58.805808   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:58.805856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:58.840288   68713 cri.go:89] found id: ""
	I0815 18:39:58.840315   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.840325   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:58.840336   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:58.840351   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:58.895856   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:58.895893   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:58.909453   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:58.909478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:58.975939   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:58.975960   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:58.975971   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.055318   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:59.055353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.595588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:01.608625   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:01.608690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:01.646105   68713 cri.go:89] found id: ""
	I0815 18:40:01.646133   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.646144   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:01.646151   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:01.646214   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:01.685162   68713 cri.go:89] found id: ""
	I0815 18:40:01.685192   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.685202   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:01.685210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:01.685261   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:01.721452   68713 cri.go:89] found id: ""
	I0815 18:40:01.721479   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.721499   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:01.721507   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:01.721576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:01.762288   68713 cri.go:89] found id: ""
	I0815 18:40:01.762318   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.762331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:01.762339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:01.762429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:01.800547   68713 cri.go:89] found id: ""
	I0815 18:40:01.800579   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.800590   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:01.800598   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:01.800660   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:01.839182   68713 cri.go:89] found id: ""
	I0815 18:40:01.839214   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.839223   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:01.839229   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:01.839294   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:01.875364   68713 cri.go:89] found id: ""
	I0815 18:40:01.875390   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.875398   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:01.875404   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:01.875452   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:01.910485   68713 cri.go:89] found id: ""
	I0815 18:40:01.910512   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.910521   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:01.910535   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:01.910547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.951970   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:01.951998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:02.005720   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:02.005764   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:02.020941   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:02.020969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:02.101206   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:02.101224   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:02.101236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.850909   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:02.349180   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.659366   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.153614   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.158375   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.159868   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:04.687482   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:04.701501   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:04.701562   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.739613   68713 cri.go:89] found id: ""
	I0815 18:40:04.739636   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.739644   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:04.739650   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:04.739704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:04.774419   68713 cri.go:89] found id: ""
	I0815 18:40:04.774443   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.774453   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:04.774460   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:04.774522   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:04.809516   68713 cri.go:89] found id: ""
	I0815 18:40:04.809538   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.809547   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:04.809552   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:04.809612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:04.843822   68713 cri.go:89] found id: ""
	I0815 18:40:04.843850   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.843870   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:04.843878   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:04.843942   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:04.883853   68713 cri.go:89] found id: ""
	I0815 18:40:04.883881   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.883892   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:04.883900   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:04.883962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:04.918811   68713 cri.go:89] found id: ""
	I0815 18:40:04.918838   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.918846   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:04.918852   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:04.918903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:04.953076   68713 cri.go:89] found id: ""
	I0815 18:40:04.953101   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.953110   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:04.953116   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:04.953163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:04.988219   68713 cri.go:89] found id: ""
	I0815 18:40:04.988246   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.988255   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:04.988264   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:04.988275   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:05.060859   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:05.060896   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:05.060913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:05.146768   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:05.146817   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:05.187816   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:05.187845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:05.239027   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:05.239067   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:07.754503   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:07.769608   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:07.769695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:06.850409   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.155042   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.654547   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:09.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.658972   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:10.159255   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.804435   68713 cri.go:89] found id: ""
	I0815 18:40:07.804460   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.804468   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:07.804474   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:07.804551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:07.839760   68713 cri.go:89] found id: ""
	I0815 18:40:07.839787   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.839797   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:07.839804   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:07.839868   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:07.877984   68713 cri.go:89] found id: ""
	I0815 18:40:07.878009   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.878017   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:07.878022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:07.878070   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:07.914294   68713 cri.go:89] found id: ""
	I0815 18:40:07.914319   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.914328   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:07.914336   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:07.914395   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:07.948751   68713 cri.go:89] found id: ""
	I0815 18:40:07.948777   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.948787   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:07.948795   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:07.948861   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:07.982262   68713 cri.go:89] found id: ""
	I0815 18:40:07.982288   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.982296   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:07.982302   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:07.982358   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:08.015560   68713 cri.go:89] found id: ""
	I0815 18:40:08.015588   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.015596   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:08.015602   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:08.015662   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:08.049854   68713 cri.go:89] found id: ""
	I0815 18:40:08.049878   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.049885   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:08.049893   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:08.049905   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:08.102269   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:08.102303   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:08.117181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:08.117209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:08.188586   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:08.188609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:08.188623   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:08.272204   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:08.272239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:10.813223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:10.826181   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:10.826257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:10.863728   68713 cri.go:89] found id: ""
	I0815 18:40:10.863753   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.863761   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:10.863766   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:10.863813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:10.898074   68713 cri.go:89] found id: ""
	I0815 18:40:10.898102   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.898113   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:10.898121   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:10.898183   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:10.933948   68713 cri.go:89] found id: ""
	I0815 18:40:10.933980   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.933991   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:10.933998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:10.934059   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:10.972402   68713 cri.go:89] found id: ""
	I0815 18:40:10.972428   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.972436   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:10.972442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:10.972509   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:11.006814   68713 cri.go:89] found id: ""
	I0815 18:40:11.006843   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.006851   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:11.006857   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:11.006909   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:11.042739   68713 cri.go:89] found id: ""
	I0815 18:40:11.042763   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.042771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:11.042777   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:11.042835   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:11.079132   68713 cri.go:89] found id: ""
	I0815 18:40:11.079164   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.079173   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:11.079179   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:11.079228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:11.113271   68713 cri.go:89] found id: ""
	I0815 18:40:11.113298   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.113309   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:11.113317   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:11.113328   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:11.166669   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:11.166698   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:11.180789   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:11.180815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:11.247954   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:11.247985   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:11.247999   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:11.331952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:11.331995   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:09.349194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.349627   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.850439   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.655088   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.656674   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:12.658287   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:15.158361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.874466   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:13.888346   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:13.888416   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:13.922542   68713 cri.go:89] found id: ""
	I0815 18:40:13.922569   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.922579   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:13.922586   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:13.922654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:13.958039   68713 cri.go:89] found id: ""
	I0815 18:40:13.958066   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.958076   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:13.958082   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:13.958131   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:13.994095   68713 cri.go:89] found id: ""
	I0815 18:40:13.994125   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.994136   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:13.994144   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:13.994195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:14.027918   68713 cri.go:89] found id: ""
	I0815 18:40:14.027949   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.027960   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:14.027969   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:14.028027   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:14.063849   68713 cri.go:89] found id: ""
	I0815 18:40:14.063879   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.063889   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:14.063897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:14.063957   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:14.098444   68713 cri.go:89] found id: ""
	I0815 18:40:14.098473   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.098483   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:14.098490   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:14.098553   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:14.136834   68713 cri.go:89] found id: ""
	I0815 18:40:14.136861   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.136874   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:14.136880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:14.136925   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:14.172377   68713 cri.go:89] found id: ""
	I0815 18:40:14.172400   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.172408   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:14.172415   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:14.172430   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:14.212212   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:14.212242   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:14.268412   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:14.268450   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:14.282978   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:14.283006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:14.352777   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:14.352796   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:14.352822   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:16.939906   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:16.953118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:16.953178   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:16.991697   68713 cri.go:89] found id: ""
	I0815 18:40:16.991723   68713 logs.go:276] 0 containers: []
	W0815 18:40:16.991731   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:16.991736   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:16.991801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:17.027572   68713 cri.go:89] found id: ""
	I0815 18:40:17.027602   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.027613   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:17.027623   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:17.027682   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:17.060718   68713 cri.go:89] found id: ""
	I0815 18:40:17.060750   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.060763   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:17.060771   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:17.060829   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:17.096746   68713 cri.go:89] found id: ""
	I0815 18:40:17.096771   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.096780   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:17.096786   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:17.096846   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:17.130755   68713 cri.go:89] found id: ""
	I0815 18:40:17.130791   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.130802   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:17.130810   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:17.130872   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:17.167991   68713 cri.go:89] found id: ""
	I0815 18:40:17.168016   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.168026   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:17.168034   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:17.168093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:17.200695   68713 cri.go:89] found id: ""
	I0815 18:40:17.200722   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.200733   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:17.200741   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:17.200799   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:17.237788   68713 cri.go:89] found id: ""
	I0815 18:40:17.237816   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.237824   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:17.237833   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:17.237848   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:17.288888   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:17.288921   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:17.302862   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:17.302903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:17.370062   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:17.370085   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:17.370100   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:17.444742   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:17.444781   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:16.349749   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.849197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:16.155555   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.654875   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:17.160009   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.657774   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.984813   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:19.998010   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:19.998077   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:20.032880   68713 cri.go:89] found id: ""
	I0815 18:40:20.032903   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.032912   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:20.032918   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:20.032973   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:20.069191   68713 cri.go:89] found id: ""
	I0815 18:40:20.069224   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.069236   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:20.069243   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:20.069301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:20.101930   68713 cri.go:89] found id: ""
	I0815 18:40:20.101954   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.101962   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:20.101968   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:20.102016   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:20.136981   68713 cri.go:89] found id: ""
	I0815 18:40:20.137006   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.137014   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:20.137020   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:20.137066   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:20.174517   68713 cri.go:89] found id: ""
	I0815 18:40:20.174543   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.174550   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:20.174556   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:20.174611   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:20.208525   68713 cri.go:89] found id: ""
	I0815 18:40:20.208549   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.208559   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:20.208567   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:20.208626   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:20.240824   68713 cri.go:89] found id: ""
	I0815 18:40:20.240855   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.240867   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:20.240874   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:20.240946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:20.277683   68713 cri.go:89] found id: ""
	I0815 18:40:20.277710   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.277720   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:20.277728   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:20.277739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:20.324271   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:20.324304   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:20.376250   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:20.376285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:20.392777   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:20.392813   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:20.464122   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:20.464156   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:20.464180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:20.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:22.849591   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:20.654982   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.154537   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:21.658354   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.658505   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.041684   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:23.055779   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:23.055858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:23.095391   68713 cri.go:89] found id: ""
	I0815 18:40:23.095414   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.095426   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:23.095432   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:23.095483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:23.134907   68713 cri.go:89] found id: ""
	I0815 18:40:23.134936   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.134943   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:23.134949   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:23.134994   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:23.171806   68713 cri.go:89] found id: ""
	I0815 18:40:23.171845   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.171854   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:23.171861   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:23.171924   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:23.205378   68713 cri.go:89] found id: ""
	I0815 18:40:23.205404   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.205412   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:23.205417   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:23.205467   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:23.239503   68713 cri.go:89] found id: ""
	I0815 18:40:23.239531   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.239540   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:23.239547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:23.239614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:23.275802   68713 cri.go:89] found id: ""
	I0815 18:40:23.275828   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.275842   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:23.275849   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:23.275894   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:23.310127   68713 cri.go:89] found id: ""
	I0815 18:40:23.310154   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.310167   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:23.310173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:23.310219   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:23.344646   68713 cri.go:89] found id: ""
	I0815 18:40:23.344674   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.344685   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:23.344696   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:23.344711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:23.397260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:23.397310   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:23.425518   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:23.425553   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:23.495528   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:23.495547   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:23.495562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:23.574489   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:23.574524   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.119044   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:26.133806   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:26.133880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:26.175683   68713 cri.go:89] found id: ""
	I0815 18:40:26.175711   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.175722   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:26.175730   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:26.175789   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:26.210634   68713 cri.go:89] found id: ""
	I0815 18:40:26.210658   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.210665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:26.210671   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:26.210724   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:26.244146   68713 cri.go:89] found id: ""
	I0815 18:40:26.244176   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.244187   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:26.244195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:26.244274   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:26.277312   68713 cri.go:89] found id: ""
	I0815 18:40:26.277335   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.277343   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:26.277349   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:26.277410   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:26.311538   68713 cri.go:89] found id: ""
	I0815 18:40:26.311562   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.311570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:26.311576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:26.311623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:26.347816   68713 cri.go:89] found id: ""
	I0815 18:40:26.347840   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.347847   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:26.347853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:26.347906   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:26.381211   68713 cri.go:89] found id: ""
	I0815 18:40:26.381234   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.381242   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:26.381248   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:26.381303   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:26.413982   68713 cri.go:89] found id: ""
	I0815 18:40:26.414010   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.414018   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:26.414027   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:26.414038   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:26.500686   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:26.500721   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.537615   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:26.537642   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:26.590119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:26.590150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:26.603713   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:26.603739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:26.675455   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:25.349400   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.853388   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:25.155463   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.155580   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.156973   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:26.158898   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:28.658576   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.176084   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:29.189743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:29.189813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:29.225500   68713 cri.go:89] found id: ""
	I0815 18:40:29.225536   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.225548   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:29.225557   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:29.225614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:29.261839   68713 cri.go:89] found id: ""
	I0815 18:40:29.261866   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.261877   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:29.261884   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:29.261946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:29.296685   68713 cri.go:89] found id: ""
	I0815 18:40:29.296708   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.296716   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:29.296728   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:29.296787   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:29.332524   68713 cri.go:89] found id: ""
	I0815 18:40:29.332550   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.332558   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:29.332564   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:29.332615   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:29.368918   68713 cri.go:89] found id: ""
	I0815 18:40:29.368943   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.368953   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:29.368961   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:29.369020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:29.403175   68713 cri.go:89] found id: ""
	I0815 18:40:29.403200   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.403211   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:29.403218   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:29.403279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:29.438957   68713 cri.go:89] found id: ""
	I0815 18:40:29.438981   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.438989   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:29.438994   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:29.439051   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:29.472153   68713 cri.go:89] found id: ""
	I0815 18:40:29.472184   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.472195   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:29.472206   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:29.472221   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:29.560484   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:29.560547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:29.600366   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:29.600402   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:29.656536   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:29.656569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:29.669899   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:29.669925   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:29.738515   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:32.239207   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:32.253976   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:32.254048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:32.290918   68713 cri.go:89] found id: ""
	I0815 18:40:32.290942   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.290951   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:32.290957   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:32.291009   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:32.325567   68713 cri.go:89] found id: ""
	I0815 18:40:32.325596   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.325606   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:32.325613   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:32.325674   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:32.360959   68713 cri.go:89] found id: ""
	I0815 18:40:32.360994   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.361005   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:32.361015   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:32.361090   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:32.398583   68713 cri.go:89] found id: ""
	I0815 18:40:32.398614   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.398625   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:32.398633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:32.398696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:32.432980   68713 cri.go:89] found id: ""
	I0815 18:40:32.433007   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.433017   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:32.433024   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:32.433088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:32.467645   68713 cri.go:89] found id: ""
	I0815 18:40:32.467678   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.467688   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:32.467697   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:32.467757   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:32.504233   68713 cri.go:89] found id: ""
	I0815 18:40:32.504265   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.504275   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:32.504282   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:32.504347   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:32.539127   68713 cri.go:89] found id: ""
	I0815 18:40:32.539160   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.539175   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:32.539186   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:32.539200   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:32.620782   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:32.620818   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:32.660920   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:32.660950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:32.714392   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:32.714425   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:32.727629   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:32.727655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:40:30.349267   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:32.349896   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:34.154871   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.157219   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:33.158733   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:35.158871   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:40:32.801258   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.301393   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:35.315460   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:35.315515   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:35.352266   68713 cri.go:89] found id: ""
	I0815 18:40:35.352287   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.352295   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:35.352301   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:35.352345   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:35.387274   68713 cri.go:89] found id: ""
	I0815 18:40:35.387305   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.387316   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:35.387324   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:35.387386   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:35.422376   68713 cri.go:89] found id: ""
	I0815 18:40:35.422403   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.422413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:35.422419   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:35.422464   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:35.456423   68713 cri.go:89] found id: ""
	I0815 18:40:35.456452   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.456459   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:35.456465   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:35.456544   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:35.494878   68713 cri.go:89] found id: ""
	I0815 18:40:35.494903   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.494912   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:35.494919   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:35.494980   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:35.528027   68713 cri.go:89] found id: ""
	I0815 18:40:35.528051   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.528062   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:35.528069   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:35.528128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:35.568543   68713 cri.go:89] found id: ""
	I0815 18:40:35.568570   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.568580   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:35.568587   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:35.568654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:35.627717   68713 cri.go:89] found id: ""
	I0815 18:40:35.627747   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.627766   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:35.627777   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:35.627792   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:35.691497   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:35.691530   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:35.705062   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:35.705092   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:35.783785   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.783806   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:35.783819   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:35.867282   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:35.867317   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:34.848226   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.849242   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.850686   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.154981   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.155165   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:37.659017   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.158408   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.407940   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:38.421571   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:38.421648   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:38.456551   68713 cri.go:89] found id: ""
	I0815 18:40:38.456586   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.456597   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:38.456604   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:38.456665   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:38.494133   68713 cri.go:89] found id: ""
	I0815 18:40:38.494167   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.494179   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:38.494186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:38.494253   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:38.531566   68713 cri.go:89] found id: ""
	I0815 18:40:38.531599   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.531610   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:38.531617   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:38.531678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:38.567613   68713 cri.go:89] found id: ""
	I0815 18:40:38.567640   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.567652   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:38.567659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:38.567717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:38.603172   68713 cri.go:89] found id: ""
	I0815 18:40:38.603201   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.603212   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:38.603225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:38.603284   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:38.639600   68713 cri.go:89] found id: ""
	I0815 18:40:38.639629   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.639640   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:38.639648   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:38.639710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:38.675780   68713 cri.go:89] found id: ""
	I0815 18:40:38.675811   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.675821   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:38.675828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:38.675885   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:38.708745   68713 cri.go:89] found id: ""
	I0815 18:40:38.708775   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.708786   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:38.708796   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:38.708815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:38.722485   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:38.722514   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:38.793913   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:38.793936   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:38.793950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:38.880706   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:38.880744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:38.919505   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:38.919533   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.472452   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:41.486204   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:41.486264   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:41.520251   68713 cri.go:89] found id: ""
	I0815 18:40:41.520282   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.520294   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:41.520302   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:41.520362   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:41.561294   68713 cri.go:89] found id: ""
	I0815 18:40:41.561325   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.561336   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:41.561343   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:41.561403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:41.595290   68713 cri.go:89] found id: ""
	I0815 18:40:41.595318   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.595326   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:41.595331   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:41.595381   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:41.629706   68713 cri.go:89] found id: ""
	I0815 18:40:41.629736   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.629744   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:41.629750   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:41.629816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:41.671862   68713 cri.go:89] found id: ""
	I0815 18:40:41.671885   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.671893   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:41.671898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:41.671951   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:41.710298   68713 cri.go:89] found id: ""
	I0815 18:40:41.710349   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.710360   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:41.710368   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:41.710425   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:41.745434   68713 cri.go:89] found id: ""
	I0815 18:40:41.745472   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.745487   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:41.745492   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:41.745548   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:41.781038   68713 cri.go:89] found id: ""
	I0815 18:40:41.781073   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.781081   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:41.781088   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:41.781099   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:41.863977   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:41.864023   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:41.907477   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:41.907505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.962921   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:41.962956   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:41.976458   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:41.976505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:42.044372   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:41.349260   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.349615   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.656633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.154626   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:42.658519   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.659640   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.544803   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:44.559538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:44.559595   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:44.595471   68713 cri.go:89] found id: ""
	I0815 18:40:44.595501   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.595511   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:44.595518   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:44.595581   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:44.630148   68713 cri.go:89] found id: ""
	I0815 18:40:44.630173   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.630181   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:44.630189   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:44.630245   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:44.666084   68713 cri.go:89] found id: ""
	I0815 18:40:44.666110   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.666119   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:44.666126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:44.666180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:44.700286   68713 cri.go:89] found id: ""
	I0815 18:40:44.700320   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.700331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:44.700339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:44.700394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:44.734115   68713 cri.go:89] found id: ""
	I0815 18:40:44.734143   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.734151   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:44.734157   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:44.734216   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:44.770306   68713 cri.go:89] found id: ""
	I0815 18:40:44.770363   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.770376   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:44.770383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:44.770453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:44.806766   68713 cri.go:89] found id: ""
	I0815 18:40:44.806790   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.806798   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:44.806803   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:44.806865   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:44.843574   68713 cri.go:89] found id: ""
	I0815 18:40:44.843603   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.843613   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:44.843623   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:44.843638   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:44.896119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:44.896148   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:44.909537   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:44.909562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:44.980268   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:44.980290   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:44.980307   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:45.066589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:45.066626   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:47.605934   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:47.620644   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:47.620709   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:47.660939   68713 cri.go:89] found id: ""
	I0815 18:40:47.660960   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.660967   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:47.660973   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:47.661021   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:47.701018   68713 cri.go:89] found id: ""
	I0815 18:40:47.701047   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.701059   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:47.701107   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:47.701177   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:47.739487   68713 cri.go:89] found id: ""
	I0815 18:40:47.739514   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.739523   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:47.739528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:47.739584   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:47.781483   68713 cri.go:89] found id: ""
	I0815 18:40:47.781508   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.781515   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:47.781520   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:47.781571   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:45.850565   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.851368   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:45.156177   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.654437   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.157895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:49.658101   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.816781   68713 cri.go:89] found id: ""
	I0815 18:40:47.816806   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.816813   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:47.816819   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:47.816875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:47.853951   68713 cri.go:89] found id: ""
	I0815 18:40:47.853976   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.853984   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:47.853990   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:47.854062   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:47.892208   68713 cri.go:89] found id: ""
	I0815 18:40:47.892237   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.892246   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:47.892252   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:47.892311   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:47.926916   68713 cri.go:89] found id: ""
	I0815 18:40:47.926944   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.926965   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:47.926976   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:47.926990   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:48.002907   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:48.002927   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:48.002942   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:48.085727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:48.085762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:48.127192   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:48.127224   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:48.180172   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:48.180208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:50.694573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:50.709411   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:50.709472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:50.750956   68713 cri.go:89] found id: ""
	I0815 18:40:50.750985   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.750994   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:50.751000   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:50.751048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:50.791072   68713 cri.go:89] found id: ""
	I0815 18:40:50.791149   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.791174   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:50.791186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:50.791247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:50.827692   68713 cri.go:89] found id: ""
	I0815 18:40:50.827717   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.827728   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:50.827735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:50.827794   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:50.866587   68713 cri.go:89] found id: ""
	I0815 18:40:50.866616   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.866626   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:50.866633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:50.866692   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:50.907012   68713 cri.go:89] found id: ""
	I0815 18:40:50.907040   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.907047   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:50.907053   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:50.907101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:50.951212   68713 cri.go:89] found id: ""
	I0815 18:40:50.951243   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.951256   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:50.951263   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:50.951316   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:50.989771   68713 cri.go:89] found id: ""
	I0815 18:40:50.989802   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.989812   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:50.989818   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:50.989867   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:51.024423   68713 cri.go:89] found id: ""
	I0815 18:40:51.024454   68713 logs.go:276] 0 containers: []
	W0815 18:40:51.024465   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:51.024475   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:51.024500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:51.076973   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:51.077012   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:51.090963   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:51.090989   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:51.169981   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:51.170005   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:51.170029   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:51.248990   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:51.249040   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:50.349092   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.350278   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:50.154517   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.148131   68248 pod_ready.go:82] duration metric: took 4m0.000077937s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	E0815 18:40:52.148161   68248 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 18:40:52.148183   68248 pod_ready.go:39] duration metric: took 4m13.224994468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:40:52.148235   68248 kubeadm.go:597] duration metric: took 4m20.945128985s to restartPrimaryControlPlane
	W0815 18:40:52.148324   68248 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:40:52.148376   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:40:51.660289   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:54.157718   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:53.790172   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:53.803752   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:53.803816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:53.843203   68713 cri.go:89] found id: ""
	I0815 18:40:53.843231   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.843246   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:53.843254   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:53.843314   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:53.878975   68713 cri.go:89] found id: ""
	I0815 18:40:53.879000   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.879008   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:53.879013   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:53.879078   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:53.915640   68713 cri.go:89] found id: ""
	I0815 18:40:53.915668   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.915675   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:53.915683   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:53.915746   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:53.956312   68713 cri.go:89] found id: ""
	I0815 18:40:53.956340   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.956356   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:53.956365   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:53.956426   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:53.992276   68713 cri.go:89] found id: ""
	I0815 18:40:53.992304   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.992314   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:53.992322   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:53.992387   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:54.034653   68713 cri.go:89] found id: ""
	I0815 18:40:54.034682   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.034693   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:54.034701   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:54.034761   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:54.072993   68713 cri.go:89] found id: ""
	I0815 18:40:54.073018   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.073027   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:54.073038   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:54.073107   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:54.107414   68713 cri.go:89] found id: ""
	I0815 18:40:54.107446   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.107456   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:54.107466   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:54.107481   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:54.145900   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:54.145928   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:54.197609   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:54.197639   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:54.211384   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:54.211410   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:54.280991   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:54.281018   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:54.281031   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:56.868270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:56.881168   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:56.881248   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:56.915206   68713 cri.go:89] found id: ""
	I0815 18:40:56.915235   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.915243   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:56.915249   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:56.915308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:56.950838   68713 cri.go:89] found id: ""
	I0815 18:40:56.950864   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.950873   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:56.950879   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:56.950937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:56.993625   68713 cri.go:89] found id: ""
	I0815 18:40:56.993649   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.993656   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:56.993662   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:56.993718   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:57.029109   68713 cri.go:89] found id: ""
	I0815 18:40:57.029139   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.029150   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:57.029158   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:57.029213   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:57.063480   68713 cri.go:89] found id: ""
	I0815 18:40:57.063518   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.063530   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:57.063538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:57.063598   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:57.102830   68713 cri.go:89] found id: ""
	I0815 18:40:57.102859   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.102870   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:57.102877   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:57.102938   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:57.137116   68713 cri.go:89] found id: ""
	I0815 18:40:57.137146   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.137159   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:57.137173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:57.137235   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:57.174678   68713 cri.go:89] found id: ""
	I0815 18:40:57.174706   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.174717   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:57.174727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:57.174741   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:57.213270   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:57.213311   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:57.269463   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:57.269500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:57.283891   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:57.283915   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:57.355563   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:57.355589   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:57.355601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:54.849266   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:57.350343   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:56.657843   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:58.658098   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:59.943493   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:59.957225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:59.957285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:59.993113   68713 cri.go:89] found id: ""
	I0815 18:40:59.993142   68713 logs.go:276] 0 containers: []
	W0815 18:40:59.993153   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:59.993167   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:59.993228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:00.033485   68713 cri.go:89] found id: ""
	I0815 18:41:00.033515   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.033525   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:00.033533   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:00.033594   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:00.070808   68713 cri.go:89] found id: ""
	I0815 18:41:00.070830   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.070838   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:00.070844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:00.070893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:00.113043   68713 cri.go:89] found id: ""
	I0815 18:41:00.113067   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.113076   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:00.113082   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:00.113139   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:00.148089   68713 cri.go:89] found id: ""
	I0815 18:41:00.148118   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.148129   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:00.148136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:00.148206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:00.188343   68713 cri.go:89] found id: ""
	I0815 18:41:00.188375   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.188386   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:00.188394   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:00.188448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:00.224287   68713 cri.go:89] found id: ""
	I0815 18:41:00.224312   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.224323   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:00.224337   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:00.224398   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:00.263983   68713 cri.go:89] found id: ""
	I0815 18:41:00.264008   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.264016   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:00.264025   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:00.264037   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:00.278057   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:00.278083   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:00.355112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:00.355133   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:00.355146   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:00.436636   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:00.436672   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:00.474774   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:00.474801   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:59.849797   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:02.349363   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:01.158004   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.158380   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:05.658860   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.027434   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:03.041422   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:03.041496   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:03.074093   68713 cri.go:89] found id: ""
	I0815 18:41:03.074119   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.074130   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:03.074138   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:03.074198   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:03.111489   68713 cri.go:89] found id: ""
	I0815 18:41:03.111517   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.111529   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:03.111537   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:03.111599   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:03.147716   68713 cri.go:89] found id: ""
	I0815 18:41:03.147747   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.147756   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:03.147762   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:03.147825   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:03.184609   68713 cri.go:89] found id: ""
	I0815 18:41:03.184635   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.184644   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:03.184652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:03.184710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:03.221839   68713 cri.go:89] found id: ""
	I0815 18:41:03.221869   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.221878   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:03.221883   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:03.221935   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:03.262619   68713 cri.go:89] found id: ""
	I0815 18:41:03.262649   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.262661   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:03.262669   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:03.262733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:03.297826   68713 cri.go:89] found id: ""
	I0815 18:41:03.297849   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.297864   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:03.297875   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:03.297922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:03.345046   68713 cri.go:89] found id: ""
	I0815 18:41:03.345074   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.345083   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:03.345095   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:03.345133   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:03.416878   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:03.416905   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:03.416920   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:03.491548   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:03.491583   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:03.533821   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:03.533852   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:03.587749   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:03.587787   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.104002   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:06.118123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:06.118195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:06.156179   68713 cri.go:89] found id: ""
	I0815 18:41:06.156204   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.156213   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:06.156218   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:06.156275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:06.192834   68713 cri.go:89] found id: ""
	I0815 18:41:06.192858   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.192866   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:06.192871   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:06.192918   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:06.228355   68713 cri.go:89] found id: ""
	I0815 18:41:06.228379   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.228387   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:06.228393   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:06.228453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:06.262041   68713 cri.go:89] found id: ""
	I0815 18:41:06.262068   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.262079   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:06.262086   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:06.262152   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:06.303217   68713 cri.go:89] found id: ""
	I0815 18:41:06.303249   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.303261   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:06.303268   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:06.303335   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:06.337180   68713 cri.go:89] found id: ""
	I0815 18:41:06.337208   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.337215   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:06.337222   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:06.337270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:06.375054   68713 cri.go:89] found id: ""
	I0815 18:41:06.375081   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.375088   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:06.375095   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:06.375163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:06.412188   68713 cri.go:89] found id: ""
	I0815 18:41:06.412216   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.412227   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:06.412239   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:06.412255   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.425607   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:06.425633   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:06.500853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:06.500872   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:06.500883   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:06.577297   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:06.577333   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:06.620209   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:06.620239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:04.848677   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:06.849254   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.849300   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.157734   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:10.157969   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:09.171606   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:09.184197   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:09.184257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:09.217865   68713 cri.go:89] found id: ""
	I0815 18:41:09.217893   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.217904   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:09.217912   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:09.217967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:09.254032   68713 cri.go:89] found id: ""
	I0815 18:41:09.254055   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.254064   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:09.254073   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:09.254128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:09.291772   68713 cri.go:89] found id: ""
	I0815 18:41:09.291798   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.291808   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:09.291816   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:09.291880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:09.326695   68713 cri.go:89] found id: ""
	I0815 18:41:09.326717   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.326726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:09.326731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:09.326791   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:09.365779   68713 cri.go:89] found id: ""
	I0815 18:41:09.365807   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.365818   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:09.365825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:09.365880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:09.413475   68713 cri.go:89] found id: ""
	I0815 18:41:09.413500   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.413509   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:09.413514   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:09.413578   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:09.449483   68713 cri.go:89] found id: ""
	I0815 18:41:09.449511   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.449521   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:09.449528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:09.449623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:09.487484   68713 cri.go:89] found id: ""
	I0815 18:41:09.487513   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.487525   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:09.487535   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:09.487549   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:09.536746   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:09.536777   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:09.549912   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:09.549944   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:09.619192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:09.619227   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:09.619246   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:09.698370   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:09.698404   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:12.240745   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:12.254814   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:12.254875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:12.291346   68713 cri.go:89] found id: ""
	I0815 18:41:12.291376   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.291387   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:12.291395   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:12.291456   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:12.324832   68713 cri.go:89] found id: ""
	I0815 18:41:12.324867   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.324878   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:12.324886   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:12.324950   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:12.360172   68713 cri.go:89] found id: ""
	I0815 18:41:12.360193   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.360201   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:12.360206   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:12.360251   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:12.394671   68713 cri.go:89] found id: ""
	I0815 18:41:12.394700   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.394710   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:12.394731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:12.394800   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:12.428951   68713 cri.go:89] found id: ""
	I0815 18:41:12.428999   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.429007   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:12.429013   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:12.429057   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:12.466035   68713 cri.go:89] found id: ""
	I0815 18:41:12.466061   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.466069   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:12.466075   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:12.466125   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:12.500003   68713 cri.go:89] found id: ""
	I0815 18:41:12.500031   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.500042   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:12.500050   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:12.500105   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:12.537433   68713 cri.go:89] found id: ""
	I0815 18:41:12.537457   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.537464   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:12.537473   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:12.537484   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:12.586768   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:12.586809   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:12.600549   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:12.600578   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:12.673112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:12.673138   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:12.673154   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:12.754689   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:12.754726   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:11.348767   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.349973   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:12.158249   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.158354   68429 pod_ready.go:82] duration metric: took 4m0.006607137s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:13.158373   68429 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:13.158381   68429 pod_ready.go:39] duration metric: took 4m7.064501997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:13.158395   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:13.158423   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:13.158467   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:13.203746   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.203771   68429 cri.go:89] found id: ""
	I0815 18:41:13.203779   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:13.203840   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.208188   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:13.208248   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:13.245326   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.245351   68429 cri.go:89] found id: ""
	I0815 18:41:13.245359   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:13.245412   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.250212   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:13.250281   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:13.296537   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:13.296565   68429 cri.go:89] found id: ""
	I0815 18:41:13.296576   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:13.296634   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.300823   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:13.300881   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:13.337973   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.338018   68429 cri.go:89] found id: ""
	I0815 18:41:13.338031   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:13.338083   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.342251   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:13.342307   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:13.379921   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.379948   68429 cri.go:89] found id: ""
	I0815 18:41:13.379957   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:13.380005   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.384451   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:13.384539   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:13.421077   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:13.421113   68429 cri.go:89] found id: ""
	I0815 18:41:13.421122   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:13.421180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.425566   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:13.425640   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:13.468663   68429 cri.go:89] found id: ""
	I0815 18:41:13.468688   68429 logs.go:276] 0 containers: []
	W0815 18:41:13.468696   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:13.468701   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:13.468753   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:13.506689   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:13.506711   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:13.506715   68429 cri.go:89] found id: ""
	I0815 18:41:13.506723   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:13.506784   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.511177   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.515519   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:13.515543   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:13.583771   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:13.583806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:13.714906   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:13.714945   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.766512   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:13.766548   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.818416   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:13.818450   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.859035   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:13.859073   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.901515   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:13.901546   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:14.437262   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:14.437304   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:14.453511   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:14.453551   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:14.489238   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:14.489267   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:14.540141   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:14.540184   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:14.574758   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:14.574785   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:14.609370   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:14.609398   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:15.294667   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:15.307758   68713 kubeadm.go:597] duration metric: took 4m2.67500099s to restartPrimaryControlPlane
	W0815 18:41:15.307840   68713 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:41:15.307872   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:41:15.761255   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:15.776049   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:15.786643   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:15.796517   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:15.796537   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:15.796585   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:15.806118   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:15.806167   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:15.816363   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:15.826396   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:15.826449   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:15.836538   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.847035   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:15.847093   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.857475   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:15.867084   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:15.867144   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:15.879736   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:15.954497   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:41:15.954588   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:16.098128   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:16.098244   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:16.098345   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:41:16.288507   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:16.290439   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:16.290555   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:16.290656   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:16.290756   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:16.290831   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:16.290923   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:16.291003   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:16.291096   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:16.291182   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:16.291280   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:16.291396   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:16.291457   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:16.291509   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:16.363570   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:16.549782   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:16.789250   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:16.983388   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:17.004293   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:17.006438   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:17.006485   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:17.154583   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:17.156594   68713 out.go:235]   - Booting up control plane ...
	I0815 18:41:17.156717   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:17.177351   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:17.179286   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:17.180313   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:17.183829   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:41:15.850424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.348986   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.430273   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.281857018s)
	I0815 18:41:18.430359   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:18.445633   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:18.457459   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:18.469748   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:18.469769   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:18.469818   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:18.480099   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:18.480146   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:18.491871   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:18.501274   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:18.501339   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:18.510186   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.518803   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:18.518863   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.527843   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:18.536437   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:18.536514   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:18.545573   68248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:18.596478   68248 kubeadm.go:310] W0815 18:41:18.577134    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.597311   68248 kubeadm.go:310] W0815 18:41:18.577958    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.709937   68248 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:41:17.151343   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:17.173653   68429 api_server.go:72] duration metric: took 4m18.293407117s to wait for apiserver process to appear ...
	I0815 18:41:17.173677   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:17.173724   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:17.173784   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:17.211484   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.211509   68429 cri.go:89] found id: ""
	I0815 18:41:17.211518   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:17.211583   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.216011   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:17.216107   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:17.265454   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.265486   68429 cri.go:89] found id: ""
	I0815 18:41:17.265497   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:17.265554   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.269804   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:17.269868   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:17.310339   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.310363   68429 cri.go:89] found id: ""
	I0815 18:41:17.310371   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:17.310435   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.315639   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:17.315695   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:17.352364   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.352387   68429 cri.go:89] found id: ""
	I0815 18:41:17.352396   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:17.352452   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.356782   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:17.356848   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:17.396704   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.396733   68429 cri.go:89] found id: ""
	I0815 18:41:17.396744   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:17.396799   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.400920   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:17.400985   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:17.440361   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.440390   68429 cri.go:89] found id: ""
	I0815 18:41:17.440400   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:17.440464   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.445057   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:17.445127   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:17.487341   68429 cri.go:89] found id: ""
	I0815 18:41:17.487369   68429 logs.go:276] 0 containers: []
	W0815 18:41:17.487380   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:17.487388   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:17.487446   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:17.528197   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.528218   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.528223   68429 cri.go:89] found id: ""
	I0815 18:41:17.528229   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:17.528285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.532536   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.536745   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:17.536768   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.574236   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:17.574268   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:17.617822   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:17.617853   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.673009   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:17.673037   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.717620   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:17.717647   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.764641   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:17.764671   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.815586   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:17.815618   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.855287   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:17.855310   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.906486   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:17.906514   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.941540   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:17.941562   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:18.373461   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:18.373497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:18.454203   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:18.454244   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:18.470284   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:18.470315   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:20.349635   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:22.350034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:21.080947   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:41:21.085334   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:41:21.086420   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:21.086442   68429 api_server.go:131] duration metric: took 3.912756949s to wait for apiserver health ...
	I0815 18:41:21.086452   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:21.086481   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:21.086537   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:21.124183   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.124210   68429 cri.go:89] found id: ""
	I0815 18:41:21.124218   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:21.124285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.128402   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:21.128472   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:21.164737   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.164768   68429 cri.go:89] found id: ""
	I0815 18:41:21.164779   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:21.164835   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.170622   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:21.170699   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:21.206823   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.206847   68429 cri.go:89] found id: ""
	I0815 18:41:21.206855   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:21.206910   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.211055   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:21.211128   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:21.255529   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.255555   68429 cri.go:89] found id: ""
	I0815 18:41:21.255565   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:21.255629   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.260062   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:21.260139   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:21.298058   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.298116   68429 cri.go:89] found id: ""
	I0815 18:41:21.298124   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:21.298180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.302821   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:21.302892   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:21.340895   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.340925   68429 cri.go:89] found id: ""
	I0815 18:41:21.340936   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:21.341003   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.345545   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:21.345638   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:21.383180   68429 cri.go:89] found id: ""
	I0815 18:41:21.383212   68429 logs.go:276] 0 containers: []
	W0815 18:41:21.383223   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:21.383232   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:21.383301   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:21.421152   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:21.421178   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.421185   68429 cri.go:89] found id: ""
	I0815 18:41:21.421198   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:21.421257   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.426326   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.430307   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:21.430351   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:21.562655   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:21.562697   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.613436   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:21.613470   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.674678   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:21.674721   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.717283   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:21.717316   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.760218   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:21.760249   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.802313   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:21.802352   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.874565   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:21.874608   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:21.891629   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:21.891666   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:21.934128   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:21.934170   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.985467   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:21.985497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:22.023731   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:22.023770   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:22.403584   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:22.403626   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:25.005734   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:41:25.005760   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.005766   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.005770   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.005775   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.005778   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.005781   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.005788   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.005793   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.005799   68429 system_pods.go:74] duration metric: took 3.919341536s to wait for pod list to return data ...
	I0815 18:41:25.005806   68429 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:25.008398   68429 default_sa.go:45] found service account: "default"
	I0815 18:41:25.008419   68429 default_sa.go:55] duration metric: took 2.608281ms for default service account to be created ...
	I0815 18:41:25.008427   68429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:25.012784   68429 system_pods.go:86] 8 kube-system pods found
	I0815 18:41:25.012804   68429 system_pods.go:89] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.012810   68429 system_pods.go:89] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.012817   68429 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.012821   68429 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.012825   68429 system_pods.go:89] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.012828   68429 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.012834   68429 system_pods.go:89] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.012838   68429 system_pods.go:89] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.012850   68429 system_pods.go:126] duration metric: took 4.415694ms to wait for k8s-apps to be running ...
	I0815 18:41:25.012858   68429 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:25.012905   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:25.028245   68429 system_svc.go:56] duration metric: took 15.378403ms WaitForService to wait for kubelet
	I0815 18:41:25.028272   68429 kubeadm.go:582] duration metric: took 4m26.148030358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:25.028290   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:25.030696   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:25.030717   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:25.030728   68429 node_conditions.go:105] duration metric: took 2.43352ms to run NodePressure ...
	I0815 18:41:25.030742   68429 start.go:241] waiting for startup goroutines ...
	I0815 18:41:25.030751   68429 start.go:246] waiting for cluster config update ...
	I0815 18:41:25.030768   68429 start.go:255] writing updated cluster config ...
	I0815 18:41:25.031028   68429 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:25.077910   68429 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:25.079973   68429 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-423062" cluster and "default" namespace by default
	I0815 18:41:27.911884   68248 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 18:41:27.911943   68248 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:27.912011   68248 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:27.912130   68248 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:27.912272   68248 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 18:41:27.912359   68248 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:27.913884   68248 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:27.913990   68248 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:27.914092   68248 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:27.914197   68248 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:27.914289   68248 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:27.914362   68248 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:27.914433   68248 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:27.914521   68248 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:27.914606   68248 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:27.914859   68248 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:27.914984   68248 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:27.915040   68248 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:27.915119   68248 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:27.915190   68248 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:27.915268   68248 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 18:41:27.915336   68248 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:27.915419   68248 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:27.915500   68248 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:27.915606   68248 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:27.915691   68248 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:27.917229   68248 out.go:235]   - Booting up control plane ...
	I0815 18:41:27.917311   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:27.917377   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:27.917433   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:27.917521   68248 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:27.917590   68248 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:27.917623   68248 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:27.917740   68248 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 18:41:27.917829   68248 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 18:41:27.917880   68248 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00200618s
	I0815 18:41:27.917954   68248 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 18:41:27.918011   68248 kubeadm.go:310] [api-check] The API server is healthy after 5.501605719s
	I0815 18:41:27.918122   68248 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 18:41:27.918268   68248 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 18:41:27.918361   68248 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 18:41:27.918626   68248 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-555028 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 18:41:27.918723   68248 kubeadm.go:310] [bootstrap-token] Using token: 99xu37.bm6hiisu91f6rbvd
	I0815 18:41:27.920248   68248 out.go:235]   - Configuring RBAC rules ...
	I0815 18:41:27.920360   68248 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 18:41:27.920467   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 18:41:27.920651   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 18:41:27.920785   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 18:41:27.920938   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 18:41:27.921052   68248 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 18:41:27.921225   68248 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 18:41:27.921286   68248 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 18:41:27.921356   68248 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 18:41:27.921369   68248 kubeadm.go:310] 
	I0815 18:41:27.921422   68248 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 18:41:27.921428   68248 kubeadm.go:310] 
	I0815 18:41:27.921488   68248 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 18:41:27.921497   68248 kubeadm.go:310] 
	I0815 18:41:27.921521   68248 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 18:41:27.921570   68248 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 18:41:27.921619   68248 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 18:41:27.921625   68248 kubeadm.go:310] 
	I0815 18:41:27.921698   68248 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 18:41:27.921711   68248 kubeadm.go:310] 
	I0815 18:41:27.921776   68248 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 18:41:27.921787   68248 kubeadm.go:310] 
	I0815 18:41:27.921858   68248 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 18:41:27.921963   68248 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 18:41:27.922055   68248 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 18:41:27.922064   68248 kubeadm.go:310] 
	I0815 18:41:27.922166   68248 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 18:41:27.922281   68248 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 18:41:27.922306   68248 kubeadm.go:310] 
	I0815 18:41:27.922413   68248 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922550   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 18:41:27.922593   68248 kubeadm.go:310] 	--control-plane 
	I0815 18:41:27.922603   68248 kubeadm.go:310] 
	I0815 18:41:27.922703   68248 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 18:41:27.922712   68248 kubeadm.go:310] 
	I0815 18:41:27.922800   68248 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922901   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 18:41:27.922909   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:41:27.922916   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:41:27.924596   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:41:24.849483   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.350715   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.926142   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:41:27.938307   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:41:27.958862   68248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:41:27.958974   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:27.959032   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-555028 minikube.k8s.io/updated_at=2024_08_15T18_41_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=embed-certs-555028 minikube.k8s.io/primary=true
	I0815 18:41:28.156844   68248 ops.go:34] apiserver oom_adj: -16
	I0815 18:41:28.157122   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:28.657735   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.157713   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.658109   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.157486   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.657573   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.157463   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.658073   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.757929   68248 kubeadm.go:1113] duration metric: took 3.799012728s to wait for elevateKubeSystemPrivileges
	I0815 18:41:31.757969   68248 kubeadm.go:394] duration metric: took 5m0.607372858s to StartCluster
	I0815 18:41:31.757992   68248 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.758070   68248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:41:31.759686   68248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.759915   68248 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:41:31.759982   68248 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:41:31.760072   68248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-555028"
	I0815 18:41:31.760090   68248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-555028"
	I0815 18:41:31.760115   68248 addons.go:69] Setting metrics-server=true in profile "embed-certs-555028"
	I0815 18:41:31.760133   68248 addons.go:234] Setting addon metrics-server=true in "embed-certs-555028"
	W0815 18:41:31.760141   68248 addons.go:243] addon metrics-server should already be in state true
	I0815 18:41:31.760148   68248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-555028"
	I0815 18:41:31.760174   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760110   68248 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-555028"
	W0815 18:41:31.760230   68248 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:41:31.760270   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760108   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:41:31.760603   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760619   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760637   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760642   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760658   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760708   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.761566   68248 out.go:177] * Verifying Kubernetes components...
	I0815 18:41:31.762780   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:41:31.777893   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0815 18:41:31.778444   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.779021   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.779049   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.779496   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.780129   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.780182   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.780954   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0815 18:41:31.781146   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0815 18:41:31.781506   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.781586   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.782056   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782061   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782078   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782079   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782437   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782494   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782685   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.783004   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.783034   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.786246   68248 addons.go:234] Setting addon default-storageclass=true in "embed-certs-555028"
	W0815 18:41:31.786270   68248 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:41:31.786300   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.786682   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.786714   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.800152   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	I0815 18:41:31.800729   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.801272   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.801295   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.801656   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.801835   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.803539   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0815 18:41:31.803751   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.804058   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.804640   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.804660   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.805007   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.805157   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.806098   68248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:41:31.806397   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0815 18:41:31.806814   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.807269   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.807450   68248 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:31.807466   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:41:31.807484   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.807744   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.807757   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.808066   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.808889   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.808923   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.809143   68248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:41:31.810575   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:41:31.810593   68248 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:41:31.810619   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.810648   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811760   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.811761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.811802   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811948   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.812101   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.812243   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.814211   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.814675   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814953   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.815117   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.815271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.815391   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.829657   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0815 18:41:31.830122   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.830710   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.830734   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.831077   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.831291   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.833016   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.833271   68248 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:31.833285   68248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:41:31.833302   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.836248   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836655   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.836682   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836908   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.837097   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.837233   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.837410   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.988466   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:41:32.010147   68248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019505   68248 node_ready.go:49] node "embed-certs-555028" has status "Ready":"True"
	I0815 18:41:32.019529   68248 node_ready.go:38] duration metric: took 9.346825ms for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019541   68248 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:32.032036   68248 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:32.125991   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:32.138532   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:41:32.138554   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:41:32.155222   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:32.196478   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:41:32.196517   68248 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:41:32.270461   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:32.270495   68248 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:41:32.405567   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:33.205712   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.050454437s)
	I0815 18:41:33.205772   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205785   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.205793   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.079759984s)
	I0815 18:41:33.205826   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205838   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206153   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206169   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206184   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206194   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206200   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206205   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206210   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206218   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206202   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206226   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206415   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206421   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206430   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206432   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.245033   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.245061   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.245328   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.245343   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.651886   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246273862s)
	I0815 18:41:33.651945   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.651960   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652264   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652307   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.652326   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.652335   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652618   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652640   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652650   68248 addons.go:475] Verifying addon metrics-server=true in "embed-certs-555028"
	I0815 18:41:33.652697   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.654487   68248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:41:29.848462   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:31.850877   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:33.655868   68248 addons.go:510] duration metric: took 1.89588756s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 18:41:34.044605   68248 pod_ready.go:103] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:34.538170   68248 pod_ready.go:93] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.538199   68248 pod_ready.go:82] duration metric: took 2.506135047s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.538212   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543160   68248 pod_ready.go:93] pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.543182   68248 pod_ready.go:82] duration metric: took 4.962289ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543195   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547126   68248 pod_ready.go:93] pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.547144   68248 pod_ready.go:82] duration metric: took 3.94279ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547152   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:36.553459   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:37.555276   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:37.555299   68248 pod_ready.go:82] duration metric: took 3.008140869s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:37.555307   68248 pod_ready.go:39] duration metric: took 5.535754922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:37.555330   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:37.555378   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:37.575318   68248 api_server.go:72] duration metric: took 5.815371975s to wait for apiserver process to appear ...
	I0815 18:41:37.575344   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:37.575361   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:41:37.580989   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:41:37.582142   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:37.582164   68248 api_server.go:131] duration metric: took 6.812732ms to wait for apiserver health ...
	I0815 18:41:37.582174   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:37.589334   68248 system_pods.go:59] 9 kube-system pods found
	I0815 18:41:37.589366   68248 system_pods.go:61] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.589377   68248 system_pods.go:61] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.589385   68248 system_pods.go:61] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.589390   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.589397   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.589403   68248 system_pods.go:61] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.589410   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.589422   68248 system_pods.go:61] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.589431   68248 system_pods.go:61] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.589439   68248 system_pods.go:74] duration metric: took 7.257758ms to wait for pod list to return data ...
	I0815 18:41:37.589450   68248 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:37.592468   68248 default_sa.go:45] found service account: "default"
	I0815 18:41:37.592500   68248 default_sa.go:55] duration metric: took 3.029278ms for default service account to be created ...
	I0815 18:41:37.592511   68248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:37.597697   68248 system_pods.go:86] 9 kube-system pods found
	I0815 18:41:37.597725   68248 system_pods.go:89] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.597730   68248 system_pods.go:89] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.597736   68248 system_pods.go:89] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.597740   68248 system_pods.go:89] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.597744   68248 system_pods.go:89] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.597747   68248 system_pods.go:89] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.597751   68248 system_pods.go:89] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.597756   68248 system_pods.go:89] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.597763   68248 system_pods.go:89] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.597769   68248 system_pods.go:126] duration metric: took 5.252997ms to wait for k8s-apps to be running ...
	I0815 18:41:37.597779   68248 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:37.597819   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:37.616004   68248 system_svc.go:56] duration metric: took 18.217091ms WaitForService to wait for kubelet
	I0815 18:41:37.616032   68248 kubeadm.go:582] duration metric: took 5.856091444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:37.616049   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:37.619195   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:37.619215   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:37.619223   68248 node_conditions.go:105] duration metric: took 3.169759ms to run NodePressure ...
	I0815 18:41:37.619234   68248 start.go:241] waiting for startup goroutines ...
	I0815 18:41:37.619242   68248 start.go:246] waiting for cluster config update ...
	I0815 18:41:37.619253   68248 start.go:255] writing updated cluster config ...
	I0815 18:41:37.619520   68248 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:37.669469   68248 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:37.671485   68248 out.go:177] * Done! kubectl is now configured to use "embed-certs-555028" cluster and "default" namespace by default
	I0815 18:41:34.350702   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:36.849248   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:39.348684   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:41.349379   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:43.848932   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:46.348801   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:48.349736   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:50.848728   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:52.850583   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.184855   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:41:57.185437   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:41:57.185667   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:54.851200   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.349542   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:42:02.186077   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:02.186272   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:59.349724   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:59.349748   67936 pod_ready.go:82] duration metric: took 4m0.007281981s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:59.349757   67936 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:59.349763   67936 pod_ready.go:39] duration metric: took 4m1.606987494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:59.349779   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:59.349802   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:59.349844   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:59.395509   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:41:59.395541   67936 cri.go:89] found id: ""
	I0815 18:41:59.395552   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:41:59.395608   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.400063   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:59.400140   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:59.435356   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:41:59.435379   67936 cri.go:89] found id: ""
	I0815 18:41:59.435386   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:41:59.435431   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.440159   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:59.440213   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:59.479810   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.479841   67936 cri.go:89] found id: ""
	I0815 18:41:59.479851   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:41:59.479907   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.484341   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:59.484394   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:59.521077   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.521104   67936 cri.go:89] found id: ""
	I0815 18:41:59.521114   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:41:59.521168   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.525075   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:59.525131   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:59.564058   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:41:59.564084   67936 cri.go:89] found id: ""
	I0815 18:41:59.564093   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:41:59.564150   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.568668   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:59.568734   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:59.604385   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.604406   67936 cri.go:89] found id: ""
	I0815 18:41:59.604416   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:41:59.604473   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.609023   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:59.609095   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:59.646289   67936 cri.go:89] found id: ""
	I0815 18:41:59.646334   67936 logs.go:276] 0 containers: []
	W0815 18:41:59.646346   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:59.646355   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:59.646422   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:59.681861   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.681889   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:41:59.681895   67936 cri.go:89] found id: ""
	I0815 18:41:59.681903   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:41:59.681951   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.686379   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.690328   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:59.690353   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:59.759302   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:41:59.759338   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.798249   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:41:59.798276   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.834097   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:41:59.834129   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.885365   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:41:59.885398   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.923013   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:59.923038   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:59.938162   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:59.938192   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:00.077340   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:00.077377   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:00.122292   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:00.122323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:00.165209   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:00.165235   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:00.201278   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:00.201317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:00.238007   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:00.238042   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:00.709997   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:00.710043   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.252351   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:42:03.268074   67936 api_server.go:72] duration metric: took 4m12.770065297s to wait for apiserver process to appear ...
	I0815 18:42:03.268104   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:42:03.268159   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:03.268227   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:03.305890   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:03.305913   67936 cri.go:89] found id: ""
	I0815 18:42:03.305923   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:03.305981   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.309958   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:03.310019   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:03.344602   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:03.344630   67936 cri.go:89] found id: ""
	I0815 18:42:03.344639   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:03.344696   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.349261   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:03.349317   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:03.383892   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:03.383912   67936 cri.go:89] found id: ""
	I0815 18:42:03.383919   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:03.383968   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.388158   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:03.388219   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:03.423264   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.423293   67936 cri.go:89] found id: ""
	I0815 18:42:03.423303   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:03.423352   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.427436   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:03.427496   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:03.470792   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.470819   67936 cri.go:89] found id: ""
	I0815 18:42:03.470829   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:03.470890   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.475884   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:03.475956   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:03.513081   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.513103   67936 cri.go:89] found id: ""
	I0815 18:42:03.513110   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:03.513161   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.517913   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:03.517985   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:03.556149   67936 cri.go:89] found id: ""
	I0815 18:42:03.556180   67936 logs.go:276] 0 containers: []
	W0815 18:42:03.556191   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:03.556199   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:03.556257   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:03.595987   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:03.596015   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:03.596021   67936 cri.go:89] found id: ""
	I0815 18:42:03.596030   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:03.596112   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.600430   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.604422   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:03.604443   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:03.676629   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:03.676665   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.717487   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:03.717514   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.755606   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:03.755632   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.815152   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:03.815187   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.857853   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:03.857882   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:04.296939   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:04.296983   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:04.312346   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:04.312373   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:04.424132   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:04.424162   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:04.482298   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:04.482326   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:04.526805   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:04.526832   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:04.564842   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:04.564871   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:04.602297   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:04.602323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.137972   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:42:07.143165   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:42:07.144155   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:42:07.144174   67936 api_server.go:131] duration metric: took 3.876063215s to wait for apiserver health ...
	I0815 18:42:07.144182   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:42:07.144201   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:07.144243   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:07.185685   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:07.185709   67936 cri.go:89] found id: ""
	I0815 18:42:07.185717   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:07.185782   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.190086   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:07.190179   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:07.233020   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:07.233044   67936 cri.go:89] found id: ""
	I0815 18:42:07.233053   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:07.233114   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.237639   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:07.237698   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:07.277613   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:07.277642   67936 cri.go:89] found id: ""
	I0815 18:42:07.277652   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:07.277714   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.282273   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:07.282346   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:07.324972   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.325003   67936 cri.go:89] found id: ""
	I0815 18:42:07.325013   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:07.325071   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.329402   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:07.329470   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:07.369812   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.369840   67936 cri.go:89] found id: ""
	I0815 18:42:07.369849   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:07.369902   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.373993   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:07.374055   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:07.412036   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.412062   67936 cri.go:89] found id: ""
	I0815 18:42:07.412072   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:07.412145   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.416191   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:07.416263   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:07.457677   67936 cri.go:89] found id: ""
	I0815 18:42:07.457710   67936 logs.go:276] 0 containers: []
	W0815 18:42:07.457721   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:07.457728   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:07.457792   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:07.498173   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:07.498199   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.498204   67936 cri.go:89] found id: ""
	I0815 18:42:07.498210   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:07.498268   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.502704   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.506501   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:07.506520   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.542685   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:07.542713   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.584070   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:07.584097   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.634780   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:07.634812   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.669410   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:07.669436   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:08.062406   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:08.062454   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:08.077171   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:08.077209   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:08.186125   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:08.186158   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:08.229621   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:08.229655   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:08.266791   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:08.266818   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:08.314172   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:08.314197   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:08.388793   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:08.388837   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:08.438287   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:08.438317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:10.990845   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:42:10.990875   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.990879   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.990883   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.990887   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.990890   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.990894   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.990900   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.990905   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.990913   67936 system_pods.go:74] duration metric: took 3.846725869s to wait for pod list to return data ...
	I0815 18:42:10.990919   67936 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:42:10.993933   67936 default_sa.go:45] found service account: "default"
	I0815 18:42:10.993958   67936 default_sa.go:55] duration metric: took 3.032805ms for default service account to be created ...
	I0815 18:42:10.993968   67936 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:42:10.998531   67936 system_pods.go:86] 8 kube-system pods found
	I0815 18:42:10.998553   67936 system_pods.go:89] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.998558   67936 system_pods.go:89] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.998562   67936 system_pods.go:89] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.998567   67936 system_pods.go:89] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.998570   67936 system_pods.go:89] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.998575   67936 system_pods.go:89] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.998582   67936 system_pods.go:89] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.998586   67936 system_pods.go:89] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.998592   67936 system_pods.go:126] duration metric: took 4.619003ms to wait for k8s-apps to be running ...
	I0815 18:42:10.998598   67936 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:42:10.998638   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:42:11.015236   67936 system_svc.go:56] duration metric: took 16.627802ms WaitForService to wait for kubelet
	I0815 18:42:11.015260   67936 kubeadm.go:582] duration metric: took 4m20.517256799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:42:11.015280   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:42:11.018544   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:42:11.018570   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:42:11.018584   67936 node_conditions.go:105] duration metric: took 3.298753ms to run NodePressure ...
	I0815 18:42:11.018598   67936 start.go:241] waiting for startup goroutines ...
	I0815 18:42:11.018611   67936 start.go:246] waiting for cluster config update ...
	I0815 18:42:11.018626   67936 start.go:255] writing updated cluster config ...
	I0815 18:42:11.018907   67936 ssh_runner.go:195] Run: rm -f paused
	I0815 18:42:11.065371   67936 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:42:11.067513   67936 out.go:177] * Done! kubectl is now configured to use "no-preload-599042" cluster and "default" namespace by default
	I0815 18:42:12.186839   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:12.187041   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:32.187938   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:32.188123   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.189799   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:43:12.190012   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.190023   68713 kubeadm.go:310] 
	I0815 18:43:12.190075   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:43:12.190133   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:43:12.190148   68713 kubeadm.go:310] 
	I0815 18:43:12.190205   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:43:12.190265   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:43:12.190394   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:43:12.190403   68713 kubeadm.go:310] 
	I0815 18:43:12.190523   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:43:12.190571   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:43:12.190627   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:43:12.190636   68713 kubeadm.go:310] 
	I0815 18:43:12.190772   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:43:12.190928   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:43:12.190950   68713 kubeadm.go:310] 
	I0815 18:43:12.191108   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:43:12.191218   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:43:12.191344   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:43:12.191478   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:43:12.191504   68713 kubeadm.go:310] 
	I0815 18:43:12.192283   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:43:12.192421   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:43:12.192523   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0815 18:43:12.192655   68713 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 18:43:12.192699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:43:12.658571   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:43:12.675797   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:43:12.687340   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:43:12.687370   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:43:12.687422   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:43:12.698401   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:43:12.698464   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:43:12.709632   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:43:12.720330   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:43:12.720386   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:43:12.731593   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.742122   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:43:12.742185   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.753042   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:43:12.762799   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:43:12.762855   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:43:12.772788   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:43:12.987927   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:45:08.956975   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:45:08.957069   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:45:08.958834   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:45:08.958904   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:45:08.958993   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:45:08.959133   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:45:08.959280   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:45:08.959376   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:45:08.961205   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:45:08.961294   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:45:08.961352   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:45:08.961424   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:45:08.961475   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:45:08.961536   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:45:08.961581   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:45:08.961637   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:45:08.961689   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:45:08.961795   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:45:08.961910   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:45:08.961971   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:45:08.962030   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:45:08.962078   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:45:08.962127   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:45:08.962214   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:45:08.962316   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:45:08.962448   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:45:08.962565   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:45:08.962626   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:45:08.962724   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:45:08.964403   68713 out.go:235]   - Booting up control plane ...
	I0815 18:45:08.964526   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:45:08.964631   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:45:08.964736   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:45:08.964866   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:45:08.965043   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:45:08.965121   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:45:08.965225   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965418   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965508   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965703   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965766   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965919   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965981   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966140   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966200   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966381   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966389   68713 kubeadm.go:310] 
	I0815 18:45:08.966438   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:45:08.966473   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:45:08.966481   68713 kubeadm.go:310] 
	I0815 18:45:08.966533   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:45:08.966580   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:45:08.966711   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:45:08.966718   68713 kubeadm.go:310] 
	I0815 18:45:08.966844   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:45:08.966900   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:45:08.966948   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:45:08.966958   68713 kubeadm.go:310] 
	I0815 18:45:08.967082   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:45:08.967201   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:45:08.967214   68713 kubeadm.go:310] 
	I0815 18:45:08.967341   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:45:08.967450   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:45:08.967548   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:45:08.967646   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:45:08.967678   68713 kubeadm.go:310] 
	I0815 18:45:08.967716   68713 kubeadm.go:394] duration metric: took 7m56.388213745s to StartCluster
	I0815 18:45:08.967768   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:45:08.967834   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:45:09.013913   68713 cri.go:89] found id: ""
	I0815 18:45:09.013943   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.013954   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:45:09.013961   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:45:09.014030   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:45:09.051370   68713 cri.go:89] found id: ""
	I0815 18:45:09.051395   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.051403   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:45:09.051409   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:45:09.051477   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:45:09.086615   68713 cri.go:89] found id: ""
	I0815 18:45:09.086646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.086653   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:45:09.086659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:45:09.086708   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:45:09.122335   68713 cri.go:89] found id: ""
	I0815 18:45:09.122370   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.122381   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:45:09.122389   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:45:09.122453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:45:09.163207   68713 cri.go:89] found id: ""
	I0815 18:45:09.163232   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.163241   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:45:09.163247   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:45:09.163308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:45:09.199396   68713 cri.go:89] found id: ""
	I0815 18:45:09.199426   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.199437   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:45:09.199444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:45:09.199504   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:45:09.235073   68713 cri.go:89] found id: ""
	I0815 18:45:09.235101   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.235112   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:45:09.235120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:45:09.235180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:45:09.271614   68713 cri.go:89] found id: ""
	I0815 18:45:09.271646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.271659   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:45:09.271671   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:45:09.271686   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:45:09.372192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:45:09.372214   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:45:09.372231   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:45:09.496743   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:45:09.496780   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:45:09.540434   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:45:09.540471   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:45:09.595546   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:45:09.595584   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 18:45:09.609831   68713 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:45:09.609885   68713 out.go:270] * 
	W0815 18:45:09.609942   68713 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.609956   68713 out.go:270] * 
	W0815 18:45:09.610794   68713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:45:09.614213   68713 out.go:201] 
	W0815 18:45:09.615379   68713 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.615420   68713 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:45:09.615437   68713 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:45:09.616840   68713 out.go:201] 
	
	
	==> CRI-O <==
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.153788126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747827153758900,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=313cd27b-3111-49ea-b553-41ff350f9890 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.154450312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9651e14a-17a3-4179-95a0-47e2c3997c41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.154533363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9651e14a-17a3-4179-95a0-47e2c3997c41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.154810532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747047326270992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905a73b877f297cda91cd2774858f2d95a9cf203fde6aa1e7e30eb8742f3bffc,PodSandboxId:3117121dfcf11740eeda723004bd1d01d3ba4aee940fa602d8ddf676c0a5713a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747027126183973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c26ca004-1d45-4ab6-ae7d-1e32614dccc0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99,PodSandboxId:bb96ed99d7d75ac456a668c56a179414052528008053df318d478956f082370f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747023962100509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-brc2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16add35-fdfd-4a39-8814-ec74318ae245,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad,PodSandboxId:7ca470c14cdbad4876f50ee655027b1b82b4b3a660a62a956146fce2af41dc7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747016539724119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bnxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3915f67-8
99a-40b9-bb2a-adef461b6320,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747016547520692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-
203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3,PodSandboxId:c9d2271313634faa933ba3161e540740f18ae3acc12e7533c5bf81b3027daf77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747012866133608,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ec8dccc8d89d60ba8baa605ce2b0f7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c,PodSandboxId:32831d409ffdb810f68e1d42e019909ed645f178a81ea873ae3b2f0077c65024,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747012825798717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db054f45180592a2196fa4f7877
4bd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2,PodSandboxId:bddd685825c2e5da33fa039e58d3a24436a433bd2bf248f647748b337eb46ee2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747012801216445,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f228bce39c4a51992ab3fab5f6435
565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428,PodSandboxId:1ebe7207156d7e5166e9329af404eb04e485b8fd0237e7e7918aed03e6b71d16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747012795472193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bb35d1563e8e927ca812fbe5d87d1
8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9651e14a-17a3-4179-95a0-47e2c3997c41 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.197968696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2b2a38c-7570-4e15-b383-d606237c011b name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.198070633Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2b2a38c-7570-4e15-b383-d606237c011b name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.199689758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=61867475-6eff-46c7-9d23-d1d3e35017e0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.200295649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747827200269799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=61867475-6eff-46c7-9d23-d1d3e35017e0 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.200900317Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f106710-10dd-4be2-8ff1-784ae609b50b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.200995710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f106710-10dd-4be2-8ff1-784ae609b50b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.201297791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747047326270992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905a73b877f297cda91cd2774858f2d95a9cf203fde6aa1e7e30eb8742f3bffc,PodSandboxId:3117121dfcf11740eeda723004bd1d01d3ba4aee940fa602d8ddf676c0a5713a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747027126183973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c26ca004-1d45-4ab6-ae7d-1e32614dccc0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99,PodSandboxId:bb96ed99d7d75ac456a668c56a179414052528008053df318d478956f082370f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747023962100509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-brc2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16add35-fdfd-4a39-8814-ec74318ae245,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad,PodSandboxId:7ca470c14cdbad4876f50ee655027b1b82b4b3a660a62a956146fce2af41dc7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747016539724119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bnxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3915f67-8
99a-40b9-bb2a-adef461b6320,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747016547520692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-
203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3,PodSandboxId:c9d2271313634faa933ba3161e540740f18ae3acc12e7533c5bf81b3027daf77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747012866133608,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ec8dccc8d89d60ba8baa605ce2b0f7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c,PodSandboxId:32831d409ffdb810f68e1d42e019909ed645f178a81ea873ae3b2f0077c65024,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747012825798717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db054f45180592a2196fa4f7877
4bd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2,PodSandboxId:bddd685825c2e5da33fa039e58d3a24436a433bd2bf248f647748b337eb46ee2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747012801216445,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f228bce39c4a51992ab3fab5f6435
565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428,PodSandboxId:1ebe7207156d7e5166e9329af404eb04e485b8fd0237e7e7918aed03e6b71d16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747012795472193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bb35d1563e8e927ca812fbe5d87d1
8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f106710-10dd-4be2-8ff1-784ae609b50b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.250653627Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=24b1cca9-7f91-4690-a2f6-9d5e3b8a933a name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.250745039Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=24b1cca9-7f91-4690-a2f6-9d5e3b8a933a name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.251837840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6f5d4fb-48ce-4557-84e3-e4af86f51159 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.252369895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747827252347295,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6f5d4fb-48ce-4557-84e3-e4af86f51159 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.253253778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=845fd0ef-0a1c-4a60-86c8-f8a2bcc49cdd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.253333031Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=845fd0ef-0a1c-4a60-86c8-f8a2bcc49cdd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.253552114Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747047326270992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905a73b877f297cda91cd2774858f2d95a9cf203fde6aa1e7e30eb8742f3bffc,PodSandboxId:3117121dfcf11740eeda723004bd1d01d3ba4aee940fa602d8ddf676c0a5713a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747027126183973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c26ca004-1d45-4ab6-ae7d-1e32614dccc0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99,PodSandboxId:bb96ed99d7d75ac456a668c56a179414052528008053df318d478956f082370f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747023962100509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-brc2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16add35-fdfd-4a39-8814-ec74318ae245,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad,PodSandboxId:7ca470c14cdbad4876f50ee655027b1b82b4b3a660a62a956146fce2af41dc7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747016539724119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bnxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3915f67-8
99a-40b9-bb2a-adef461b6320,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747016547520692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-
203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3,PodSandboxId:c9d2271313634faa933ba3161e540740f18ae3acc12e7533c5bf81b3027daf77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747012866133608,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ec8dccc8d89d60ba8baa605ce2b0f7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c,PodSandboxId:32831d409ffdb810f68e1d42e019909ed645f178a81ea873ae3b2f0077c65024,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747012825798717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db054f45180592a2196fa4f7877
4bd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2,PodSandboxId:bddd685825c2e5da33fa039e58d3a24436a433bd2bf248f647748b337eb46ee2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747012801216445,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f228bce39c4a51992ab3fab5f6435
565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428,PodSandboxId:1ebe7207156d7e5166e9329af404eb04e485b8fd0237e7e7918aed03e6b71d16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747012795472193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bb35d1563e8e927ca812fbe5d87d1
8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=845fd0ef-0a1c-4a60-86c8-f8a2bcc49cdd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.288568709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35070b39-a3b7-4bf6-8bff-e472c29be457 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.288708158Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35070b39-a3b7-4bf6-8bff-e472c29be457 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.290234003Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eadd4b2f-b459-48e2-a877-1ba4247dc758 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.290631202Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747827290608429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eadd4b2f-b459-48e2-a877-1ba4247dc758 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.291340620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9cc220d1-3abf-42f1-84d7-bcb1329c45df name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.291391145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9cc220d1-3abf-42f1-84d7-bcb1329c45df name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:27 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:50:27.291610985Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747047326270992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905a73b877f297cda91cd2774858f2d95a9cf203fde6aa1e7e30eb8742f3bffc,PodSandboxId:3117121dfcf11740eeda723004bd1d01d3ba4aee940fa602d8ddf676c0a5713a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747027126183973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c26ca004-1d45-4ab6-ae7d-1e32614dccc0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99,PodSandboxId:bb96ed99d7d75ac456a668c56a179414052528008053df318d478956f082370f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747023962100509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-brc2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16add35-fdfd-4a39-8814-ec74318ae245,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad,PodSandboxId:7ca470c14cdbad4876f50ee655027b1b82b4b3a660a62a956146fce2af41dc7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747016539724119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bnxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3915f67-8
99a-40b9-bb2a-adef461b6320,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747016547520692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-
203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3,PodSandboxId:c9d2271313634faa933ba3161e540740f18ae3acc12e7533c5bf81b3027daf77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747012866133608,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ec8dccc8d89d60ba8baa605ce2b0f7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c,PodSandboxId:32831d409ffdb810f68e1d42e019909ed645f178a81ea873ae3b2f0077c65024,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747012825798717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db054f45180592a2196fa4f7877
4bd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2,PodSandboxId:bddd685825c2e5da33fa039e58d3a24436a433bd2bf248f647748b337eb46ee2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747012801216445,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f228bce39c4a51992ab3fab5f6435
565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428,PodSandboxId:1ebe7207156d7e5166e9329af404eb04e485b8fd0237e7e7918aed03e6b71d16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747012795472193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bb35d1563e8e927ca812fbe5d87d1
8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9cc220d1-3abf-42f1-84d7-bcb1329c45df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ba0de31ac4d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   9533da6294cd4       storage-provisioner
	905a73b877f29       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   3117121dfcf11       busybox
	4002a75569d01       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   bb96ed99d7d75       coredns-6f6b679f8f-brc2r
	de97b6534ff12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   9533da6294cd4       storage-provisioner
	78aa18ab3ca1d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   7ca470c14cdba       kube-proxy-bnxv7
	7c7302ebd91e3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   c9d2271313634       etcd-default-k8s-diff-port-423062
	b5437880e3b54       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   32831d409ffdb       kube-controller-manager-default-k8s-diff-port-423062
	4ff0eaf196e91       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   bddd685825c2e       kube-scheduler-default-k8s-diff-port-423062
	a728cb5e05d1d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   1ebe7207156d7       kube-apiserver-default-k8s-diff-port-423062
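A listing with these columns can generally be reproduced on the node with crictl; the command below is a sketch assuming the default-k8s-diff-port-423062 profile, not something taken from the report itself:

    # List all containers (running and exited) via the CRI CLI on the node
    minikube ssh -p default-k8s-diff-port-423062 "sudo crictl ps -a"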
	
	
	==> coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53751 - 15090 "HINFO IN 4697154533671768996.2502729668727686100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016745811s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-423062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-423062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=default-k8s-diff-port-423062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T18_29_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-423062
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:50:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:47:38 +0000   Thu, 15 Aug 2024 18:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:47:38 +0000   Thu, 15 Aug 2024 18:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:47:38 +0000   Thu, 15 Aug 2024 18:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:47:38 +0000   Thu, 15 Aug 2024 18:37:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.7
	  Hostname:    default-k8s-diff-port-423062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1caebc083b84591add60167fa27e454
	  System UUID:                f1caebc0-83b8-4591-add6-0167fa27e454
	  Boot ID:                    d3a93374-75d3-4871-a6e0-5c63fd93ab57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-6f6b679f8f-brc2r                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-423062                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-423062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-423062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-bnxv7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-423062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-8mppk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-423062 status is now: NodeReady
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-423062 event: Registered Node default-k8s-diff-port-423062 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-423062 event: Registered Node default-k8s-diff-port-423062 in Controller
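The node description above is standard `kubectl describe node` output; assuming the kubeconfig context carries the profile name, it can be re-fetched with:

    # Re-query the description of the control-plane node
    kubectl --context default-k8s-diff-port-423062 describe node default-k8s-diff-port-423062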
	
	
	==> dmesg <==
	[Aug15 18:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051782] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039090] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.882841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.393540] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.577280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.050377] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.064774] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072686] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.218480] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.141172] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.296997] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.233668] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.061403] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.105962] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +4.587601] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.458326] systemd-fstab-generator[1556]: Ignoring "noauto" option for root device
	[Aug15 18:37] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.156722] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] <==
	{"level":"info","ts":"2024-08-15T18:36:54.424030Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.7:2379"}
	{"level":"info","ts":"2024-08-15T18:37:11.635049Z","caller":"traceutil/trace.go:171","msg":"trace[1560610023] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"299.92272ms","start":"2024-08-15T18:37:11.335105Z","end":"2024-08-15T18:37:11.635028Z","steps":["trace[1560610023] 'process raft request'  (duration: 299.574646ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:37:11.635397Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:37:11.335085Z","time spent":"300.022927ms","remote":"127.0.0.1:50638","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6830,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-423062\" mod_revision:489 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-423062\" value_size:6743 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-423062\" > >"}
	{"level":"warn","ts":"2024-08-15T18:37:12.147825Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.83435ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993406796452276917 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-423062\" mod_revision:493 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-423062\" value_size:7000 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-423062\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T18:37:12.148076Z","caller":"traceutil/trace.go:171","msg":"trace[883065608] linearizableReadLoop","detail":"{readStateIndex:660; appliedIndex:659; }","duration":"201.788462ms","start":"2024-08-15T18:37:11.946277Z","end":"2024-08-15T18:37:12.148065Z","steps":["trace[883065608] 'read index received'  (duration: 72.279826ms)","trace[883065608] 'applied index is now lower than readState.Index'  (duration: 129.507459ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:37:12.148788Z","caller":"traceutil/trace.go:171","msg":"trace[857660725] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"496.292288ms","start":"2024-08-15T18:37:11.652438Z","end":"2024-08-15T18:37:12.148730Z","steps":["trace[857660725] 'process raft request'  (duration: 366.173801ms)","trace[857660725] 'compare'  (duration: 128.745074ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:37:12.148970Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:37:11.652426Z","time spent":"496.482303ms","remote":"127.0.0.1:50638","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7078,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-423062\" mod_revision:493 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-423062\" value_size:7000 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-423062\" > >"}
	{"level":"warn","ts":"2024-08-15T18:37:12.384119Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.590364ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993406796452276919 > lease_revoke:<id:4232915754132926>","response":"size:27"}
	{"level":"info","ts":"2024-08-15T18:37:12.384259Z","caller":"traceutil/trace.go:171","msg":"trace[2014169014] linearizableReadLoop","detail":"{readStateIndex:661; appliedIndex:660; }","duration":"223.844759ms","start":"2024-08-15T18:37:12.160402Z","end":"2024-08-15T18:37:12.384247Z","steps":["trace[2014169014] 'read index received'  (duration: 20.762µs)","trace[2014169014] 'applied index is now lower than readState.Index'  (duration: 223.823003ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:37:12.384617Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.195824ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-423062\" ","response":"range_response_count:1 size:6845"}
	{"level":"info","ts":"2024-08-15T18:37:12.384714Z","caller":"traceutil/trace.go:171","msg":"trace[1862908041] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-423062; range_end:; response_count:1; response_revision:620; }","duration":"224.302626ms","start":"2024-08-15T18:37:12.160398Z","end":"2024-08-15T18:37:12.384701Z","steps":["trace[1862908041] 'agreement among raft nodes before linearized reading'  (duration: 224.013398ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:37:12.385064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.055108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-423062\" ","response":"range_response_count:1 size:5528"}
	{"level":"info","ts":"2024-08-15T18:37:12.385485Z","caller":"traceutil/trace.go:171","msg":"trace[1899756769] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-423062; range_end:; response_count:1; response_revision:620; }","duration":"224.478339ms","start":"2024-08-15T18:37:12.160997Z","end":"2024-08-15T18:37:12.385476Z","steps":["trace[1899756769] 'agreement among raft nodes before linearized reading'  (duration: 223.996415ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:37:12.385377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.950271ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:37:12.387238Z","caller":"traceutil/trace.go:171","msg":"trace[1608492420] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:620; }","duration":"146.811293ms","start":"2024-08-15T18:37:12.240414Z","end":"2024-08-15T18:37:12.387226Z","steps":["trace[1608492420] 'agreement among raft nodes before linearized reading'  (duration: 144.938871ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:37:12.802411Z","caller":"traceutil/trace.go:171","msg":"trace[1236834772] linearizableReadLoop","detail":"{readStateIndex:662; appliedIndex:661; }","duration":"191.671781ms","start":"2024-08-15T18:37:12.610724Z","end":"2024-08-15T18:37:12.802395Z","steps":["trace[1236834772] 'read index received'  (duration: 191.43646ms)","trace[1236834772] 'applied index is now lower than readState.Index'  (duration: 234.747µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:37:12.802628Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.925977ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-423062\" ","response":"range_response_count:1 size:7093"}
	{"level":"info","ts":"2024-08-15T18:37:12.802662Z","caller":"traceutil/trace.go:171","msg":"trace[99694009] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"408.541678ms","start":"2024-08-15T18:37:12.394105Z","end":"2024-08-15T18:37:12.802647Z","steps":["trace[99694009] 'process raft request'  (duration: 408.150891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:37:12.802814Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:37:12.394087Z","time spent":"408.670808ms","remote":"127.0.0.1:50638","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6620,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-423062\" mod_revision:619 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-423062\" value_size:6533 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-default-k8s-diff-port-423062\" > >"}
	{"level":"info","ts":"2024-08-15T18:37:12.802681Z","caller":"traceutil/trace.go:171","msg":"trace[1964374325] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-423062; range_end:; response_count:1; response_revision:621; }","duration":"191.993377ms","start":"2024-08-15T18:37:12.610678Z","end":"2024-08-15T18:37:12.802671Z","steps":["trace[1964374325] 'agreement among raft nodes before linearized reading'  (duration: 191.85769ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:37:13.075325Z","caller":"traceutil/trace.go:171","msg":"trace[1614579996] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"259.713703ms","start":"2024-08-15T18:37:12.815597Z","end":"2024-08-15T18:37:13.075310Z","steps":["trace[1614579996] 'process raft request'  (duration: 254.757193ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:37:46.962387Z","caller":"traceutil/trace.go:171","msg":"trace[2084325176] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"103.878082ms","start":"2024-08-15T18:37:46.858495Z","end":"2024-08-15T18:37:46.962373Z","steps":["trace[2084325176] 'process raft request'  (duration: 103.430216ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:46:54.454432Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":866}
	{"level":"info","ts":"2024-08-15T18:46:54.466809Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":866,"took":"11.534828ms","hash":1613429279,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":2633728,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2024-08-15T18:46:54.466979Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1613429279,"revision":866,"compact-revision":-1}
	
	
	==> kernel <==
	 18:50:27 up 13 min,  0 users,  load average: 0.02, 0.11, 0.09
	Linux default-k8s-diff-port-423062 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] <==
	W0815 18:46:56.665130       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:46:56.665235       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:46:56.666157       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:46:56.667352       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:47:56.666946       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:47:56.667098       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 18:47:56.668300       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:47:56.668396       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:47:56.668468       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:47:56.669569       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:49:56.669687       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:49:56.669815       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 18:49:56.669882       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:49:56.669897       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0815 18:49:56.671032       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:49:56.671080       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] <==
	E0815 18:44:59.414604       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:44:59.920213       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:45:29.421157       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:45:29.929393       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:45:59.426605       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:45:59.938754       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:46:29.433466       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:46:29.946148       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:46:59.440008       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:46:59.953526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:47:29.445579       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:47:29.961640       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:47:38.079054       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-423062"
	E0815 18:47:59.451725       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:47:59.969200       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:48:09.169943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="358.155µs"
	I0815 18:48:21.170506       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="133.594µs"
	E0815 18:48:29.458570       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:48:29.979461       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:48:59.464824       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:48:59.987288       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:49:29.470781       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:49:29.995589       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:49:59.476799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:50:00.003790       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:36:56.837357       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:36:56.857242       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.7"]
	E0815 18:36:56.857529       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:36:56.906030       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:36:56.906090       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:36:56.906126       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:36:56.912002       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:36:56.912282       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:36:56.912305       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:36:56.916483       1 config.go:197] "Starting service config controller"
	I0815 18:36:56.916514       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:36:56.916535       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:36:56.916539       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:36:56.917447       1 config.go:326] "Starting node config controller"
	I0815 18:36:56.917478       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:36:57.017102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:36:57.017211       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:36:57.017971       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] <==
	W0815 18:36:55.770017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 18:36:55.770103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.770328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 18:36:55.770428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.770630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0815 18:36:55.772743       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 18:36:55.772788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.772890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 18:36:55.772922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 18:36:55.773051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0815 18:36:55.773077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 18:36:55.773488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 18:36:55.773537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 18:36:55.773555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 18:36:55.773569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 18:36:55.773667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 18:36:55.773896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 18:36:55.820130       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:49:12 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:12.315026     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747752314635980,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:22 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:22.317034     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747762316490542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:22 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:22.317541     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747762316490542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:25 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:25.153694     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:49:32 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:32.319421     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747772318989382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:32 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:32.319748     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747772318989382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:38 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:38.153955     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:49:42 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:42.322337     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747782321797693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:42 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:42.322622     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747782321797693,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:52 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:52.154553     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:49:52 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:52.167632     933 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:49:52 default-k8s-diff-port-423062 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:49:52 default-k8s-diff-port-423062 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:49:52 default-k8s-diff-port-423062 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:49:52 default-k8s-diff-port-423062 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:49:52 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:52.324969     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747792324605679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:52 default-k8s-diff-port-423062 kubelet[933]: E0815 18:49:52.325017     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747792324605679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:02 default-k8s-diff-port-423062 kubelet[933]: E0815 18:50:02.326776     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747802326429768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:02 default-k8s-diff-port-423062 kubelet[933]: E0815 18:50:02.327311     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747802326429768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:07 default-k8s-diff-port-423062 kubelet[933]: E0815 18:50:07.153623     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:50:12 default-k8s-diff-port-423062 kubelet[933]: E0815 18:50:12.328926     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747812328313888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:12 default-k8s-diff-port-423062 kubelet[933]: E0815 18:50:12.329017     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747812328313888,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:21 default-k8s-diff-port-423062 kubelet[933]: E0815 18:50:21.154078     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:50:22 default-k8s-diff-port-423062 kubelet[933]: E0815 18:50:22.330473     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747822330160626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:22 default-k8s-diff-port-423062 kubelet[933]: E0815 18:50:22.330821     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747822330160626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] <==
	I0815 18:37:27.427133       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 18:37:27.439588       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 18:37:27.439746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 18:37:44.843976       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 18:37:44.844132       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-423062_213dbfc4-6ef0-4e02-8fb1-d789b64f197b!
	I0815 18:37:44.844978       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6dbad7f-8bb0-484b-9814-24ac362644b1", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-423062_213dbfc4-6ef0-4e02-8fb1-d789b64f197b became leader
	I0815 18:37:44.945011       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-423062_213dbfc4-6ef0-4e02-8fb1-d789b64f197b!
	
	
	==> storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] <==
	I0815 18:36:56.683595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 18:37:26.690445       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-423062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8mppk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-423062 describe pod metrics-server-6867b74b74-8mppk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-423062 describe pod metrics-server-6867b74b74-8mppk: exit status 1 (58.944838ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8mppk" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-423062 describe pod metrics-server-6867b74b74-8mppk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-555028 -n embed-certs-555028
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-15 18:50:38.182640063 +0000 UTC m=+6329.160745237
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-555028 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-555028 logs -n 25: (2.052377243s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-498665                              | stopped-upgrade-498665       | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-698209 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | disable-driver-mounts-698209                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:29 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-599042             | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-555028            | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-423062  | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-278865        | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-599042                  | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC | 15 Aug 24 18:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-555028                 | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-423062       | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-278865             | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
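For reference, the wrapped flags in the last table row above amount to a single command line (reconstructed from the row itself, not an extra invocation):

	    minikube start -p old-k8s-version-278865 --memory=2200 --alsologtostderr --wait=true \
	      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	      --keep-context=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0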
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:32:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
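Given that header layout, warning/error/fatal entries can be pulled out of a saved copy of this log by matching the leading severity letter; the filename below is only a placeholder:

	    grep -E '^[WEF][0-9]{4} ' last-start.log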
	I0815 18:32:52.788403   68713 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:32:52.788704   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788715   68713 out.go:358] Setting ErrFile to fd 2...
	I0815 18:32:52.788719   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788916   68713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:32:52.789431   68713 out.go:352] Setting JSON to false
	I0815 18:32:52.790297   68713 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8119,"bootTime":1723738654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:32:52.790355   68713 start.go:139] virtualization: kvm guest
	I0815 18:32:52.792478   68713 out.go:177] * [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:32:52.793818   68713 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:32:52.793864   68713 notify.go:220] Checking for updates...
	I0815 18:32:52.796618   68713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:32:52.797914   68713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:32:52.799054   68713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:32:52.800337   68713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:32:52.801448   68713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:32:52.803087   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:32:52.803465   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.803521   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.819013   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0815 18:32:52.819447   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.819966   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.819985   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.820284   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.820482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.822582   68713 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 18:32:52.824024   68713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:32:52.824380   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.824425   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.839486   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0815 18:32:52.839905   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.840345   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.840367   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.840730   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.840904   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.876811   68713 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:32:52.878075   68713 start.go:297] selected driver: kvm2
	I0815 18:32:52.878098   68713 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.878240   68713 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:32:52.878920   68713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.879001   68713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:32:52.894158   68713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:32:52.894895   68713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:32:52.894953   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:32:52.894969   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:32:52.895020   68713 start.go:340] cluster config:
	{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.895203   68713 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.897304   68713 out.go:177] * Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	I0815 18:32:51.348753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:32:52.898737   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:32:52.898785   68713 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:32:52.898795   68713 cache.go:56] Caching tarball of preloaded images
	I0815 18:32:52.898861   68713 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:32:52.898871   68713 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:32:52.898962   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:32:52.899159   68713 start.go:360] acquireMachinesLock for old-k8s-version-278865: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:32:57.424754   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:00.496786   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:06.576768   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:09.648759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:15.728760   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:18.800783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:24.880725   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:27.952781   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:34.032763   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:37.104737   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:43.184796   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:46.260701   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:52.336771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:55.408745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:01.488742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:04.560759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:10.640771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:13.712753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:19.792795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:22.864720   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:28.944769   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:32.016745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:38.096783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:41.168739   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:47.248802   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:50.320778   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:56.400717   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:59.472780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:05.552762   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:08.624707   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:14.704753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:17.776748   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:23.856782   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:26.932742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:33.008795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:36.080807   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:42.160767   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:45.232800   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:51.312780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:54.384719   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:00.464740   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:03.536736   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
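The long run of dial errors above is repeated failures to reach the no-preload-599042 machine (192.168.72.14) on its SSH port; the attempt gives up further down with "provision: host is not running" and schedules a retry. Outside the test run, the usual manual checks on the CI host would look roughly like this (a diagnostic sketch, not something the test executes):

	    virsh -c qemu:///system list --all
	    nc -vz -w 5 192.168.72.14 22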
	I0815 18:36:06.540805   68248 start.go:364] duration metric: took 4m1.610543673s to acquireMachinesLock for "embed-certs-555028"
	I0815 18:36:06.540869   68248 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:06.540881   68248 fix.go:54] fixHost starting: 
	I0815 18:36:06.541241   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:06.541272   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:06.556680   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0815 18:36:06.557105   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:06.557518   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:36:06.557540   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:06.557831   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:06.558059   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:06.558202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:36:06.559702   68248 fix.go:112] recreateIfNeeded on embed-certs-555028: state=Stopped err=<nil>
	I0815 18:36:06.559724   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	W0815 18:36:06.559877   68248 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:06.561410   68248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-555028" ...
	I0815 18:36:06.538474   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:06.538508   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.538805   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:36:06.538831   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.539016   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:36:06.540664   67936 machine.go:96] duration metric: took 4m37.431349663s to provisionDockerMachine
	I0815 18:36:06.540702   67936 fix.go:56] duration metric: took 4m37.452150687s for fixHost
	I0815 18:36:06.540709   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 4m37.452172562s
	W0815 18:36:06.540732   67936 start.go:714] error starting host: provision: host is not running
	W0815 18:36:06.540801   67936 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0815 18:36:06.540809   67936 start.go:729] Will try again in 5 seconds ...
	I0815 18:36:06.562384   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Start
	I0815 18:36:06.562537   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring networks are active...
	I0815 18:36:06.563252   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network default is active
	I0815 18:36:06.563554   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network mk-embed-certs-555028 is active
	I0815 18:36:06.563908   68248 main.go:141] libmachine: (embed-certs-555028) Getting domain xml...
	I0815 18:36:06.564614   68248 main.go:141] libmachine: (embed-certs-555028) Creating domain...
	I0815 18:36:07.763793   68248 main.go:141] libmachine: (embed-certs-555028) Waiting to get IP...
	I0815 18:36:07.764733   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.765099   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.765200   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.765085   69393 retry.go:31] will retry after 206.840107ms: waiting for machine to come up
	I0815 18:36:07.973596   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.974069   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.974093   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.974019   69393 retry.go:31] will retry after 319.002956ms: waiting for machine to come up
	I0815 18:36:08.294670   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.295125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.295154   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.295073   69393 retry.go:31] will retry after 425.99373ms: waiting for machine to come up
	I0815 18:36:08.722549   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.722954   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.722985   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.722903   69393 retry.go:31] will retry after 428.077891ms: waiting for machine to come up
	I0815 18:36:09.152674   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.153155   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.153187   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.153108   69393 retry.go:31] will retry after 476.041155ms: waiting for machine to come up
	I0815 18:36:09.630963   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.631456   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.631485   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.631395   69393 retry.go:31] will retry after 751.179582ms: waiting for machine to come up
	I0815 18:36:11.542364   67936 start.go:360] acquireMachinesLock for no-preload-599042: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:36:10.384466   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:10.384888   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:10.384916   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:10.384842   69393 retry.go:31] will retry after 1.028202731s: waiting for machine to come up
	I0815 18:36:11.414905   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:11.415343   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:11.415373   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:11.415283   69393 retry.go:31] will retry after 1.129105535s: waiting for machine to come up
	I0815 18:36:12.545941   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:12.546365   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:12.546387   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:12.546320   69393 retry.go:31] will retry after 1.734323812s: waiting for machine to come up
	I0815 18:36:14.283247   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:14.283622   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:14.283653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:14.283569   69393 retry.go:31] will retry after 1.657173562s: waiting for machine to come up
	I0815 18:36:15.943329   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:15.943730   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:15.943760   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:15.943669   69393 retry.go:31] will retry after 2.349664822s: waiting for machine to come up
	I0815 18:36:18.295797   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:18.296330   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:18.296363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:18.296264   69393 retry.go:31] will retry after 2.889119284s: waiting for machine to come up
	I0815 18:36:21.186597   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:21.186983   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:21.187004   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:21.186945   69393 retry.go:31] will retry after 2.79101595s: waiting for machine to come up
	I0815 18:36:23.981271   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981732   68248 main.go:141] libmachine: (embed-certs-555028) Found IP for machine: 192.168.50.234
	I0815 18:36:23.981761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has current primary IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981770   68248 main.go:141] libmachine: (embed-certs-555028) Reserving static IP address...
	I0815 18:36:23.982166   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.982189   68248 main.go:141] libmachine: (embed-certs-555028) DBG | skip adding static IP to network mk-embed-certs-555028 - found existing host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"}
	I0815 18:36:23.982200   68248 main.go:141] libmachine: (embed-certs-555028) Reserved static IP address: 192.168.50.234
	I0815 18:36:23.982210   68248 main.go:141] libmachine: (embed-certs-555028) Waiting for SSH to be available...
	I0815 18:36:23.982220   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Getting to WaitForSSH function...
	I0815 18:36:23.984253   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984578   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.984601   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984696   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH client type: external
	I0815 18:36:23.984720   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa (-rw-------)
	I0815 18:36:23.984752   68248 main.go:141] libmachine: (embed-certs-555028) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:23.984763   68248 main.go:141] libmachine: (embed-certs-555028) DBG | About to run SSH command:
	I0815 18:36:23.984772   68248 main.go:141] libmachine: (embed-certs-555028) DBG | exit 0
	I0815 18:36:24.104618   68248 main.go:141] libmachine: (embed-certs-555028) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:24.105023   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetConfigRaw
	I0815 18:36:24.105694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.108191   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108532   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.108568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108844   68248 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/config.json ...
	I0815 18:36:24.109037   68248 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:24.109055   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.109249   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.111363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111680   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.111725   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111821   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.111989   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112141   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112277   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.112454   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.112662   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.112673   68248 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:24.208951   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:24.208986   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209255   68248 buildroot.go:166] provisioning hostname "embed-certs-555028"
	I0815 18:36:24.209285   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209514   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.212393   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.212850   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.212878   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.213010   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.213198   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213340   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213466   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.213663   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.213821   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.213832   68248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-555028 && echo "embed-certs-555028" | sudo tee /etc/hostname
	I0815 18:36:24.327157   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-555028
	
	I0815 18:36:24.327191   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.330193   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330577   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.330607   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330824   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.331029   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331174   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331325   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.331508   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.331713   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.331732   68248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-555028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-555028/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-555028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:24.437909   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:24.437938   68248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:24.437977   68248 buildroot.go:174] setting up certificates
	I0815 18:36:24.437987   68248 provision.go:84] configureAuth start
	I0815 18:36:24.437996   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.438264   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.440637   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.440961   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.440993   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.441089   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.443071   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443415   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.443448   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443562   68248 provision.go:143] copyHostCerts
	I0815 18:36:24.443622   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:24.443643   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:24.443726   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:24.443843   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:24.443855   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:24.443893   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:24.443968   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:24.443977   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:24.444007   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:24.444074   68248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.embed-certs-555028 san=[127.0.0.1 192.168.50.234 embed-certs-555028 localhost minikube]
	I0815 18:36:24.507119   68248 provision.go:177] copyRemoteCerts
	I0815 18:36:24.507177   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:24.507202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.509835   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510230   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.510260   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510403   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.510606   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.510735   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.510842   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:24.590623   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:24.615635   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:36:24.643400   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:36:24.670394   68248 provision.go:87] duration metric: took 232.396705ms to configureAuth
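The server certificate generated during configureAuth above carries the SANs listed in the log line (127.0.0.1, 192.168.50.234, embed-certs-555028, localhost, minikube). Assuming openssl is present on the CI host, they can be confirmed against the generated file directly:

	    openssl x509 -noout -text -in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem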
	I0815 18:36:24.670415   68248 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:24.670609   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:24.670694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.673303   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673685   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.673721   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673863   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.674050   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674222   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674354   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.674513   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.674673   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.674688   68248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:25.149223   68429 start.go:364] duration metric: took 3m59.233021018s to acquireMachinesLock for "default-k8s-diff-port-423062"
	I0815 18:36:25.149295   68429 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:25.149306   68429 fix.go:54] fixHost starting: 
	I0815 18:36:25.149757   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:25.149799   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:25.166940   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I0815 18:36:25.167342   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:25.167882   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:25.167910   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:25.168179   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:25.168383   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:25.168553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:25.170072   68429 fix.go:112] recreateIfNeeded on default-k8s-diff-port-423062: state=Stopped err=<nil>
	I0815 18:36:25.170106   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	W0815 18:36:25.170263   68429 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:25.172091   68429 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-423062" ...
	I0815 18:36:25.173641   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Start
	I0815 18:36:25.173831   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring networks are active...
	I0815 18:36:25.174594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network default is active
	I0815 18:36:25.174981   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network mk-default-k8s-diff-port-423062 is active
	I0815 18:36:25.175410   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Getting domain xml...
	I0815 18:36:25.176275   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Creating domain...
	I0815 18:36:24.928110   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:24.928140   68248 machine.go:96] duration metric: took 819.089931ms to provisionDockerMachine
	I0815 18:36:24.928156   68248 start.go:293] postStartSetup for "embed-certs-555028" (driver="kvm2")
	I0815 18:36:24.928170   68248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:24.928190   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.928513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:24.928542   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.931301   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931756   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.931799   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931846   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.932028   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.932311   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.932477   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.011373   68248 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:25.015677   68248 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:25.015707   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:25.015798   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:25.015900   68248 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:25.016014   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:25.025465   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:25.049662   68248 start.go:296] duration metric: took 121.491742ms for postStartSetup
	I0815 18:36:25.049704   68248 fix.go:56] duration metric: took 18.508823511s for fixHost
	I0815 18:36:25.049728   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.052184   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052538   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.052583   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052718   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.052904   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053099   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.053438   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:25.053604   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:25.053614   68248 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:25.149075   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746985.122186042
	
	I0815 18:36:25.149095   68248 fix.go:216] guest clock: 1723746985.122186042
	I0815 18:36:25.149103   68248 fix.go:229] Guest: 2024-08-15 18:36:25.122186042 +0000 UTC Remote: 2024-08-15 18:36:25.049708543 +0000 UTC m=+260.258232753 (delta=72.477499ms)
	I0815 18:36:25.149131   68248 fix.go:200] guest clock delta is within tolerance: 72.477499ms
	I0815 18:36:25.149135   68248 start.go:83] releasing machines lock for "embed-certs-555028", held for 18.608287436s
	I0815 18:36:25.149158   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.149408   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:25.152125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152542   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.152568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152742   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153260   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153439   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153539   68248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:25.153587   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.153639   68248 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:25.153659   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.156311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156504   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156740   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156769   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156847   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156883   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.157040   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157122   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157303   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157318   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157473   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157479   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.233725   68248 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:25.253737   68248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:25.402047   68248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:25.409250   68248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:25.409328   68248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:25.426491   68248 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:25.426514   68248 start.go:495] detecting cgroup driver to use...
	I0815 18:36:25.426580   68248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:25.445177   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:25.459432   68248 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:25.459512   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:25.473777   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:25.488144   68248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:25.627700   68248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:25.791278   68248 docker.go:233] disabling docker service ...
	I0815 18:36:25.791349   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:25.810146   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:25.825131   68248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:25.975457   68248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:26.106757   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:26.123053   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:26.142739   68248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:26.142804   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.153163   68248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:26.153217   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.163863   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.175028   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.192480   68248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:26.208933   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.219825   68248 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.245623   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.256645   68248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:26.265947   68248 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:26.266004   68248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:26.278665   68248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:26.289519   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:26.423656   68248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:26.560919   68248 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:26.560996   68248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:26.565696   68248 start.go:563] Will wait 60s for crictl version
	I0815 18:36:26.565764   68248 ssh_runner.go:195] Run: which crictl
	I0815 18:36:26.569498   68248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:26.609872   68248 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:26.609948   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.645300   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.681229   68248 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:26.682461   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:26.685495   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686011   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:26.686037   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686323   68248 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:26.690590   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:26.703512   68248 kubeadm.go:883] updating cluster {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:26.703679   68248 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:26.703748   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:26.740601   68248 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:26.740679   68248 ssh_runner.go:195] Run: which lz4
	I0815 18:36:26.744798   68248 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:26.748894   68248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:26.748921   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:28.188174   68248 crio.go:462] duration metric: took 1.443420751s to copy over tarball
	I0815 18:36:28.188254   68248 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:26.428013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting to get IP...
	I0815 18:36:26.428929   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429397   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.429391   69513 retry.go:31] will retry after 296.45967ms: waiting for machine to come up
	I0815 18:36:26.727871   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728273   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728298   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.728237   69513 retry.go:31] will retry after 258.379179ms: waiting for machine to come up
	I0815 18:36:26.988915   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.989374   69513 retry.go:31] will retry after 418.611169ms: waiting for machine to come up
	I0815 18:36:27.409905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410358   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.410327   69513 retry.go:31] will retry after 566.642237ms: waiting for machine to come up
	I0815 18:36:27.978717   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979183   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.979125   69513 retry.go:31] will retry after 740.292473ms: waiting for machine to come up
	I0815 18:36:28.720587   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.720970   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.721008   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:28.720941   69513 retry.go:31] will retry after 610.435484ms: waiting for machine to come up
	I0815 18:36:29.333342   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333696   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333731   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:29.333632   69513 retry.go:31] will retry after 1.059086771s: waiting for machine to come up
	I0815 18:36:30.394125   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394560   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394589   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:30.394519   69513 retry.go:31] will retry after 1.279753887s: waiting for machine to come up
	I0815 18:36:30.309340   68248 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.121056035s)
	I0815 18:36:30.309382   68248 crio.go:469] duration metric: took 2.121176349s to extract the tarball
	I0815 18:36:30.309394   68248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:30.346520   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:30.394771   68248 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:30.394789   68248 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:30.394799   68248 kubeadm.go:934] updating node { 192.168.50.234 8443 v1.31.0 crio true true} ...
	I0815 18:36:30.394951   68248 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-555028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:30.395033   68248 ssh_runner.go:195] Run: crio config
	I0815 18:36:30.439636   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:30.439663   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:30.439678   68248 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:30.439707   68248 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.234 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-555028 NodeName:embed-certs-555028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:30.439899   68248 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-555028"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:30.439976   68248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:30.449774   68248 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:30.449842   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:30.458892   68248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 18:36:30.475171   68248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:30.490942   68248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 18:36:30.507498   68248 ssh_runner.go:195] Run: grep 192.168.50.234	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:30.511254   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:30.522772   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:30.646060   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:30.667948   68248 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028 for IP: 192.168.50.234
	I0815 18:36:30.667974   68248 certs.go:194] generating shared ca certs ...
	I0815 18:36:30.667994   68248 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:30.668178   68248 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:30.668231   68248 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:30.668244   68248 certs.go:256] generating profile certs ...
	I0815 18:36:30.668360   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/client.key
	I0815 18:36:30.668442   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key.539203f3
	I0815 18:36:30.668524   68248 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key
	I0815 18:36:30.668686   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:30.668725   68248 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:30.668737   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:30.668774   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:30.668807   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:30.668836   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:30.668941   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:30.669810   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:30.721245   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:30.753016   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:30.782005   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:30.815008   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 18:36:30.847615   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:30.871566   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:30.894778   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:30.919167   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:30.942597   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:30.965395   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:30.988959   68248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:31.005578   68248 ssh_runner.go:195] Run: openssl version
	I0815 18:36:31.011697   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:31.022496   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027102   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027154   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.033475   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:31.044793   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:31.055793   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060642   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060692   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.066544   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:31.077637   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:31.088468   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093295   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093347   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.098908   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:31.109856   68248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:31.114519   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:31.120709   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:31.126754   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:31.132917   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:31.138739   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:31.144785   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:31.150604   68248 kubeadm.go:392] StartCluster: {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:31.150702   68248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:31.150755   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.192152   68248 cri.go:89] found id: ""
	I0815 18:36:31.192253   68248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:31.203076   68248 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:31.203099   68248 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:31.203151   68248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:31.213659   68248 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:31.215070   68248 kubeconfig.go:125] found "embed-certs-555028" server: "https://192.168.50.234:8443"
	I0815 18:36:31.218243   68248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:31.228210   68248 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.234
	I0815 18:36:31.228245   68248 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:31.228267   68248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:31.228317   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.275944   68248 cri.go:89] found id: ""
	I0815 18:36:31.276031   68248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:31.294466   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:31.307241   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:31.307276   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:31.307327   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:36:31.316654   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:31.316722   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:31.326475   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:36:31.335726   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:31.335796   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:31.345063   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.353576   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:31.353628   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.362449   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:36:31.370717   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:31.370792   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:31.379827   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:31.389001   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:31.510611   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.158537   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.647891555s)
	I0815 18:36:33.158574   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.376600   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.459742   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.545503   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:33.545595   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.046191   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.546256   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.571236   68248 api_server.go:72] duration metric: took 1.025744612s to wait for apiserver process to appear ...
	I0815 18:36:34.571275   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:34.571297   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:31.675513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676042   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:31.675960   69513 retry.go:31] will retry after 1.669099573s: waiting for machine to come up
	I0815 18:36:33.348089   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348611   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348639   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:33.348575   69513 retry.go:31] will retry after 1.613394267s: waiting for machine to come up
	I0815 18:36:34.963674   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964187   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:34.964146   69513 retry.go:31] will retry after 2.128578928s: waiting for machine to come up
	I0815 18:36:37.262138   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.262168   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.262184   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.310539   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.310569   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.571713   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.590002   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:37.590062   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.071526   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.076179   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:38.076212   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.571714   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.576518   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:36:38.582358   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:38.582381   68248 api_server.go:131] duration metric: took 4.011097638s to wait for apiserver health ...
	I0815 18:36:38.582393   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:38.582401   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:38.584203   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:38.585513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:38.604350   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:38.645538   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:38.653445   68248 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:38.653476   68248 system_pods.go:61] "coredns-6f6b679f8f-sjx7c" [93a037b9-1e7c-471a-b62d-d7898b2b8287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:38.653486   68248 system_pods.go:61] "etcd-embed-certs-555028" [7e526b10-7acd-4d25-9847-8e11e21ba8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:38.653495   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [3f317b0f-15a1-4e7d-8ca5-3cdf70dee711] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:38.653501   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [431113cd-bce9-4ecb-8233-c5463875f1b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:38.653506   68248 system_pods.go:61] "kube-proxy-dzwt7" [a8101c7e-c010-45a3-8746-0dc20c7ef0e2] Running
	I0815 18:36:38.653513   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [84a5d051-d8c1-4097-b92c-e2f0d7a03385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:38.653520   68248 system_pods.go:61] "metrics-server-6867b74b74-wp5rn" [222160bf-6774-49a5-9f30-7582748c8a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:38.653534   68248 system_pods.go:61] "storage-provisioner" [e88c8785-2d8b-47b6-850f-e6cda74a4f5a] Running
	I0815 18:36:38.653549   68248 system_pods.go:74] duration metric: took 7.98765ms to wait for pod list to return data ...
	I0815 18:36:38.653558   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:38.656864   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:38.656893   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:38.656906   68248 node_conditions.go:105] duration metric: took 3.340245ms to run NodePressure ...
	I0815 18:36:38.656923   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:38.918518   68248 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923148   68248 kubeadm.go:739] kubelet initialised
	I0815 18:36:38.923168   68248 kubeadm.go:740] duration metric: took 4.62305ms waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923177   68248 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:38.927933   68248 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.934928   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934953   68248 pod_ready.go:82] duration metric: took 6.994953ms for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.934965   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934974   68248 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.939533   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939558   68248 pod_ready.go:82] duration metric: took 4.573835ms for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.939568   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939575   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.943567   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943590   68248 pod_ready.go:82] duration metric: took 4.004869ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.943601   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943608   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.049176   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049203   68248 pod_ready.go:82] duration metric: took 105.585473ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:39.049212   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049219   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449514   68248 pod_ready.go:93] pod "kube-proxy-dzwt7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:39.449539   68248 pod_ready.go:82] duration metric: took 400.311062ms for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449548   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
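	(Editor's note: each pod_ready.go wait above boils down to fetching the pod from the kube-system namespace and checking its Ready condition, with the extra guard that the hosting node must itself be Ready, hence the "skipping!" errors while embed-certs-555028 still reports Ready=False. A minimal client-go sketch of the per-pod check follows, assuming a kubeconfig path and reusing the scheduler pod name from this log; it is illustrative, not minikube's pod_ready.go.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Kubeconfig path is an assumption for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-555028", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}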
	I0815 18:36:37.094139   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094640   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:37.094583   69513 retry.go:31] will retry after 2.268267509s: waiting for machine to come up
	I0815 18:36:39.365595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.365975   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.366007   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:39.365938   69513 retry.go:31] will retry after 3.286154075s: waiting for machine to come up
	I0815 18:36:44.301710   68713 start.go:364] duration metric: took 3m51.402501772s to acquireMachinesLock for "old-k8s-version-278865"
	I0815 18:36:44.301771   68713 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:44.301792   68713 fix.go:54] fixHost starting: 
	I0815 18:36:44.302227   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:44.302265   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:44.319819   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0815 18:36:44.320335   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:44.320975   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:36:44.321003   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:44.321380   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:44.321572   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:36:44.321720   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:36:44.323551   68713 fix.go:112] recreateIfNeeded on old-k8s-version-278865: state=Stopped err=<nil>
	I0815 18:36:44.323586   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	W0815 18:36:44.323748   68713 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:44.325761   68713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-278865" ...
	I0815 18:36:41.456648   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:43.456917   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:42.653801   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654221   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has current primary IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654251   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Found IP for machine: 192.168.61.7
	I0815 18:36:42.654268   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserving static IP address...
	I0815 18:36:42.654714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.654759   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | skip adding static IP to network mk-default-k8s-diff-port-423062 - found existing host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"}
	I0815 18:36:42.654778   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserved static IP address: 192.168.61.7
	I0815 18:36:42.654798   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for SSH to be available...
	I0815 18:36:42.654815   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Getting to WaitForSSH function...
	I0815 18:36:42.657618   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.657968   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.657996   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.658093   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH client type: external
	I0815 18:36:42.658115   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa (-rw-------)
	I0815 18:36:42.658200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:42.658223   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | About to run SSH command:
	I0815 18:36:42.658234   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | exit 0
	I0815 18:36:42.780714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:42.781095   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetConfigRaw
	I0815 18:36:42.781734   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:42.784384   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.784820   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.784853   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.785137   68429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/config.json ...
	I0815 18:36:42.785364   68429 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:42.785390   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:42.785599   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.788083   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.788465   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788655   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.788833   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789006   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.789301   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.789607   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.789625   68429 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:42.889002   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:42.889031   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889237   68429 buildroot.go:166] provisioning hostname "default-k8s-diff-port-423062"
	I0815 18:36:42.889260   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.892072   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892422   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.892445   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892645   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.892846   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.892995   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.893148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.893286   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.893490   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.893505   68429 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-423062 && echo "default-k8s-diff-port-423062" | sudo tee /etc/hostname
	I0815 18:36:43.008310   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-423062
	
	I0815 18:36:43.008347   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.011091   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011446   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.011472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011653   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.011864   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012027   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012159   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.012321   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.012518   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.012537   68429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-423062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-423062/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-423062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:43.121095   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:43.121123   68429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:43.121157   68429 buildroot.go:174] setting up certificates
	I0815 18:36:43.121172   68429 provision.go:84] configureAuth start
	I0815 18:36:43.121186   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:43.121510   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:43.123863   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124178   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.124200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124312   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.126385   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126633   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.126667   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126784   68429 provision.go:143] copyHostCerts
	I0815 18:36:43.126861   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:43.126884   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:43.126944   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:43.127052   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:43.127062   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:43.127090   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:43.127177   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:43.127187   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:43.127215   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:43.127286   68429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-423062 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-423062 localhost minikube]
	I0815 18:36:43.627396   68429 provision.go:177] copyRemoteCerts
	I0815 18:36:43.627460   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:43.627485   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.630025   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630311   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.630340   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630479   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.630670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.630850   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.630976   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:43.712571   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:43.738904   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 18:36:43.764328   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:36:43.787211   68429 provision.go:87] duration metric: took 666.026026ms to configureAuth
	I0815 18:36:43.787241   68429 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:43.787467   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:43.787567   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.789803   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790210   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.790232   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790432   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.790604   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790729   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.791021   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.791169   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.791187   68429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:44.067277   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:44.067307   68429 machine.go:96] duration metric: took 1.281926749s to provisionDockerMachine
	I0815 18:36:44.067322   68429 start.go:293] postStartSetup for "default-k8s-diff-port-423062" (driver="kvm2")
	I0815 18:36:44.067335   68429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:44.067360   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.067711   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:44.067749   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.070224   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070543   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.070573   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070740   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.070925   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.071079   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.071269   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.152784   68429 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:44.157264   68429 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:44.157291   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:44.157364   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:44.157461   68429 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:44.157580   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:44.168520   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:44.195223   68429 start.go:296] duration metric: took 127.886016ms for postStartSetup
	I0815 18:36:44.195268   68429 fix.go:56] duration metric: took 19.045962302s for fixHost
	I0815 18:36:44.195292   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.197711   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198065   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.198090   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198281   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.198438   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198614   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198768   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.198959   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:44.199154   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:44.199172   68429 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:44.301519   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747004.273982003
	
	I0815 18:36:44.301543   68429 fix.go:216] guest clock: 1723747004.273982003
	I0815 18:36:44.301553   68429 fix.go:229] Guest: 2024-08-15 18:36:44.273982003 +0000 UTC Remote: 2024-08-15 18:36:44.195273929 +0000 UTC m=+258.412094909 (delta=78.708074ms)
	I0815 18:36:44.301598   68429 fix.go:200] guest clock delta is within tolerance: 78.708074ms
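	(Editor's note: the guest clock check above parses the seconds.nanoseconds value returned by running date +%s.%N inside the VM and compares it with the host's wall clock; only when the skew exceeds a tolerance does minikube resync the guest clock. A small sketch of that comparison follows, using the values from this log; the one-second tolerance is an assumption, not necessarily the threshold minikube applies.)

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the "seconds.nanoseconds" string produced by
	// `date +%s.%N` on the guest and returns the skew relative to the host clock.
	func guestClockDelta(guest string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		// Guest and host timestamps copied from the log above; 1s tolerance is assumed.
		host := time.Date(2024, 8, 15, 18, 36, 44, 195273929, time.UTC)
		delta, err := guestClockDelta("1723747004.273982003", host)
		if err != nil {
			panic(err)
		}
		if math.Abs(delta.Seconds()) > 1.0 {
			fmt.Printf("clock skew %v exceeds tolerance, would resync guest clock\n", delta)
		} else {
			fmt.Printf("clock skew %v is within tolerance\n", delta)
		}
	}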
	I0815 18:36:44.301606   68429 start.go:83] releasing machines lock for "default-k8s-diff-port-423062", held for 19.152336719s
	I0815 18:36:44.301638   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.301903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:44.305012   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305498   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.305524   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305742   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306240   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306425   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306533   68429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:44.306595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.306689   68429 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:44.306714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.309649   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.309838   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310098   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310133   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310250   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310267   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310296   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310457   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310634   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310654   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310794   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310798   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.310947   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.412125   68429 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:44.420070   68429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:44.566014   68429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:44.572209   68429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:44.572283   68429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:44.593041   68429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:44.593067   68429 start.go:495] detecting cgroup driver to use...
	I0815 18:36:44.593145   68429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:44.613683   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:44.627766   68429 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:44.627851   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:44.641172   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:44.654952   68429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:44.778684   68429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:44.965548   68429 docker.go:233] disabling docker service ...
	I0815 18:36:44.965631   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:44.983153   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:44.999109   68429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:45.131097   68429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:45.270930   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:45.287846   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:45.309345   68429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:45.309407   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.320032   68429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:45.320092   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.331647   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.342499   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.353192   68429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:45.364163   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.381124   68429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.403692   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.415032   68429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:45.424798   68429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:45.424859   68429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:45.439077   68429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:45.448554   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:45.570697   68429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:45.719575   68429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:45.719655   68429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:45.724415   68429 start.go:563] Will wait 60s for crictl version
	I0815 18:36:45.724476   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:36:45.728443   68429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:45.770935   68429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:45.771023   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.799588   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.830915   68429 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:44.327259   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .Start
	I0815 18:36:44.327431   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring networks are active...
	I0815 18:36:44.328116   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network default is active
	I0815 18:36:44.328601   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network mk-old-k8s-version-278865 is active
	I0815 18:36:44.329081   68713 main.go:141] libmachine: (old-k8s-version-278865) Getting domain xml...
	I0815 18:36:44.331888   68713 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:36:45.633882   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting to get IP...
	I0815 18:36:45.634842   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.635216   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.635286   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.635206   69670 retry.go:31] will retry after 300.377534ms: waiting for machine to come up
	I0815 18:36:45.937793   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.938290   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.938312   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.938236   69670 retry.go:31] will retry after 282.311084ms: waiting for machine to come up
	I0815 18:36:46.222856   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.223327   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.223350   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.223283   69670 retry.go:31] will retry after 354.299649ms: waiting for machine to come up
	I0815 18:36:46.578770   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.579337   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.579360   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.579241   69670 retry.go:31] will retry after 382.947645ms: waiting for machine to come up
	I0815 18:36:46.964003   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.964911   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.964943   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.964824   69670 retry.go:31] will retry after 710.757442ms: waiting for machine to come up
	I0815 18:36:47.676738   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:47.677422   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:47.677450   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:47.677360   69670 retry.go:31] will retry after 588.944709ms: waiting for machine to come up
	I0815 18:36:45.957776   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:48.456345   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:45.832411   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:45.835145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835523   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:45.835553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835762   68429 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:45.840347   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:45.854348   68429 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:45.854471   68429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:45.854527   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:45.899238   68429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:45.899320   68429 ssh_runner.go:195] Run: which lz4
	I0815 18:36:45.903367   68429 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:45.907499   68429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:45.907526   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:47.317850   68429 crio.go:462] duration metric: took 1.414524229s to copy over tarball
	I0815 18:36:47.317929   68429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:49.443172   68429 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125212316s)
	I0815 18:36:49.443206   68429 crio.go:469] duration metric: took 2.125324606s to extract the tarball
	I0815 18:36:49.443215   68429 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:49.483693   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:49.535588   68429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:49.535617   68429 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:49.535627   68429 kubeadm.go:934] updating node { 192.168.61.7 8444 v1.31.0 crio true true} ...
	I0815 18:36:49.535753   68429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-423062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:49.535843   68429 ssh_runner.go:195] Run: crio config
	I0815 18:36:49.587186   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:49.587215   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:49.587232   68429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:49.587257   68429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-423062 NodeName:default-k8s-diff-port-423062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:49.587447   68429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-423062"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:49.587520   68429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:49.598312   68429 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:49.598376   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:49.608382   68429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0815 18:36:49.624449   68429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:49.647224   68429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
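	(Editor's note, illustration only.) The 2166-byte payload written above is the multi-document kubeadm YAML dumped a few lines earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Purely as a sketch, and not minikube's code, the documents can be split on their "---" separators and inspected with sigs.k8s.io/yaml; the on-disk path is taken from the scp step above.

package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Naive split: assumes each "---" separator sits on its own line.
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var obj map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		// Each document declares its own apiVersion and kind.
		fmt.Printf("%v / %v\n", obj["apiVersion"], obj["kind"])
	}
}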
	I0815 18:36:49.664848   68429 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:49.668582   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:49.680786   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:49.804940   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:49.826104   68429 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062 for IP: 192.168.61.7
	I0815 18:36:49.826130   68429 certs.go:194] generating shared ca certs ...
	I0815 18:36:49.826147   68429 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:49.826281   68429 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:49.826322   68429 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:49.826331   68429 certs.go:256] generating profile certs ...
	I0815 18:36:49.826403   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.key
	I0815 18:36:49.826461   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key.534debab
	I0815 18:36:49.826528   68429 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key
	I0815 18:36:49.826667   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:49.826713   68429 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:49.826725   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:49.826748   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:49.826777   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:49.826810   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:49.826868   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:49.827597   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:49.855678   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:49.891292   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:49.928612   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:49.961506   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 18:36:49.993955   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:50.019275   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:50.046773   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:50.074201   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:50.101491   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:50.125378   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:50.149974   68429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:50.166393   68429 ssh_runner.go:195] Run: openssl version
	I0815 18:36:50.172182   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:50.182755   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187110   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187155   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.192956   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:50.203680   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:50.214269   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218876   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218925   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.224463   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:50.234811   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:50.245585   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250397   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250446   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.256189   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:50.267342   68429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:50.272011   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:50.278217   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:50.284300   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:50.290402   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:50.296174   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:50.301957   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
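	(Editor's note, illustration only.) Each "openssl x509 ... -checkend 86400" run above asks whether a certificate expires within the next 24 hours. The same check can be done natively in Go by parsing the PEM and comparing NotAfter; the sketch below is not minikube's implementation, and the file path is just one of the paths seen in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given duration (the "-checkend" equivalent).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}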
	I0815 18:36:50.307807   68429 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:50.307910   68429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:50.307973   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.359833   68429 cri.go:89] found id: ""
	I0815 18:36:50.359923   68429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:50.370306   68429 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:50.370324   68429 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:50.370379   68429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:50.379585   68429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:50.380510   68429 kubeconfig.go:125] found "default-k8s-diff-port-423062" server: "https://192.168.61.7:8444"
	I0815 18:36:50.384136   68429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:50.393393   68429 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.7
	I0815 18:36:50.393428   68429 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:50.393441   68429 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:50.393494   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.428085   68429 cri.go:89] found id: ""
	I0815 18:36:50.428162   68429 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:50.444032   68429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:50.454927   68429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:50.454948   68429 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:50.455000   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 18:36:50.464733   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:50.464797   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:50.473973   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 18:36:50.482861   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:50.482910   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:50.492213   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.501173   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:50.501230   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.510299   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 18:36:50.519262   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:50.519308   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:50.528632   68429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:50.537914   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:50.655230   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:48.268221   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:48.268790   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:48.268814   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:48.268736   69670 retry.go:31] will retry after 781.489196ms: waiting for machine to come up
	I0815 18:36:49.051824   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:49.052246   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:49.052277   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:49.052182   69670 retry.go:31] will retry after 1.393037007s: waiting for machine to come up
	I0815 18:36:50.446428   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:50.446860   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:50.446892   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:50.446800   69670 retry.go:31] will retry after 1.826779004s: waiting for machine to come up
	I0815 18:36:52.275716   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:52.276208   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:52.276231   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:52.276167   69670 retry.go:31] will retry after 1.746726312s: waiting for machine to come up
	I0815 18:36:50.458388   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:52.147996   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:52.148026   68248 pod_ready.go:82] duration metric: took 12.698470185s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:52.148039   68248 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:54.153927   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:51.670903   68429 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015612511s)
	I0815 18:36:51.670943   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:51.985806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.069082   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.189200   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:52.189298   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:52.689767   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.189633   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.205099   68429 api_server.go:72] duration metric: took 1.015908263s to wait for apiserver process to appear ...
	I0815 18:36:53.205136   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:53.205162   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:53.205695   68429 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0815 18:36:53.705285   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.721139   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.721177   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:55.721193   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.750790   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.750825   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:56.205675   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.212464   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.212509   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:56.705700   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.716232   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.716277   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:57.205663   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:57.211081   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:36:57.217736   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:57.217763   68429 api_server.go:131] duration metric: took 4.012620084s to wait for apiserver health ...
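	(Editor's note, illustration only.) The loop above polls https://192.168.61.7:8444/healthz until the apiserver's post-start hooks settle and it returns 200, tolerating the interim 403 and 500 responses. Below is a minimal sketch of such a polling loop, assuming anonymous access and a self-signed CA; it is not api_server.go itself.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster uses a self-signed CA, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// 403 (anonymous) and 500 (hooks still starting) are expected early on.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.7:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}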
	I0815 18:36:57.217772   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:57.217778   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:57.219455   68429 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:54.025067   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:54.025508   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:54.025535   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:54.025462   69670 retry.go:31] will retry after 2.693215306s: waiting for machine to come up
	I0815 18:36:56.721740   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:56.722139   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:56.722178   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:56.722070   69670 retry.go:31] will retry after 3.370623363s: waiting for machine to come up
	I0815 18:36:57.220672   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:57.241710   68429 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:57.262714   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:57.272766   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:57.272822   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:57.272836   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:57.272849   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:57.272862   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:57.272872   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:36:57.272887   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:57.272896   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:57.272902   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:36:57.272913   68429 system_pods.go:74] duration metric: took 10.175415ms to wait for pod list to return data ...
	I0815 18:36:57.272924   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:57.276880   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:57.276915   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:57.276929   68429 node_conditions.go:105] duration metric: took 3.998879ms to run NodePressure ...
	I0815 18:36:57.276951   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:57.554251   68429 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558062   68429 kubeadm.go:739] kubelet initialised
	I0815 18:36:57.558084   68429 kubeadm.go:740] duration metric: took 3.811943ms waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558091   68429 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:57.562470   68429 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.567212   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567232   68429 pod_ready.go:82] duration metric: took 4.742538ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.567240   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567245   68429 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.571217   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571237   68429 pod_ready.go:82] duration metric: took 3.984908ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.571247   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571255   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.575456   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575494   68429 pod_ready.go:82] duration metric: took 4.232215ms for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.575507   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575515   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.665876   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665902   68429 pod_ready.go:82] duration metric: took 90.37918ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.665914   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665921   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.066377   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066402   68429 pod_ready.go:82] duration metric: took 400.475025ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.066411   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066426   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.465739   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465767   68429 pod_ready.go:82] duration metric: took 399.331024ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.465779   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465787   68429 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.866772   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866798   68429 pod_ready.go:82] duration metric: took 401.001046ms for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.866809   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866817   68429 pod_ready.go:39] duration metric: took 1.308717049s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
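	(Editor's note, illustration only.) The pod_ready lines above wait on each pod's Ready condition, skipping pods whose node is not yet Ready. Below is a minimal client-go sketch of that condition check; the kubeconfig path, namespace, and pod name are taken from the log as examples, and this is not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod carries a Ready condition set to True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19450-13013/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-brc2r", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podIsReady(pod))
}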
	I0815 18:36:58.866835   68429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:36:58.878274   68429 ops.go:34] apiserver oom_adj: -16
	I0815 18:36:58.878298   68429 kubeadm.go:597] duration metric: took 8.507965813s to restartPrimaryControlPlane
	I0815 18:36:58.878308   68429 kubeadm.go:394] duration metric: took 8.570508558s to StartCluster
	I0815 18:36:58.878327   68429 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.878499   68429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:36:58.879927   68429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.880213   68429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:36:58.880262   68429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:36:58.880339   68429 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880375   68429 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-423062"
	I0815 18:36:58.880374   68429 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-423062"
	W0815 18:36:58.880383   68429 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:36:58.880367   68429 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880403   68429 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.880410   68429 addons.go:243] addon metrics-server should already be in state true
	I0815 18:36:58.880414   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880422   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:58.880428   68429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-423062"
	I0815 18:36:58.880434   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880772   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880778   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880801   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880820   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880826   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880855   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.882047   68429 out.go:177] * Verifying Kubernetes components...
	I0815 18:36:58.883440   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:58.895575   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0815 18:36:58.895577   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0815 18:36:58.895739   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0815 18:36:58.896031   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896063   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896121   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896511   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896529   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896612   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896631   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896749   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896768   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896917   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.896963   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897099   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897132   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.897483   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897527   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.897535   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897558   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.900773   68429 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.900796   68429 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:36:58.900825   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.901206   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.901238   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.912877   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0815 18:36:58.912903   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37245
	I0815 18:36:58.913271   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913344   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913835   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913845   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913852   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.913862   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.914177   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914218   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914361   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.914408   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.916165   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.916601   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.918553   68429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:36:58.918560   68429 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:36:56.154697   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.654414   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.919539   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0815 18:36:58.919773   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:36:58.919790   68429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:36:58.919809   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919884   68429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:58.919900   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:36:58.919916   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919945   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.920330   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.920343   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.920777   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.921363   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.921401   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.923262   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923629   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.923656   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923684   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924108   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924256   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924319   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.924337   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924501   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924564   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.924688   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.924773   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924944   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.925266   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.938064   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0815 18:36:58.938411   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.938762   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.938782   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.939057   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.939214   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.941134   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.941395   68429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:58.941414   68429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:36:58.941436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.943936   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944331   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.944355   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.944765   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.944900   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.944977   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:59.069466   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:59.090259   68429 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:36:59.203591   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:59.232676   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:36:59.232705   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:36:59.273079   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:59.287625   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:36:59.287653   68429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:36:59.359798   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:36:59.359821   68429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:36:59.406350   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:00.373429   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16980511s)
	I0815 18:37:00.373477   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373495   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373501   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.10037967s)
	I0815 18:37:00.373546   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373563   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373787   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373805   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373848   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373852   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373863   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373866   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373890   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373879   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373937   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.374313   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374322   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.374326   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.374344   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374355   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.379434   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.379450   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.379666   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.379679   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.389853   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.389872   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390152   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390173   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390181   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.390189   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390396   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390447   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390461   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390475   68429 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-423062"
	I0815 18:37:00.392530   68429 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:37:00.393703   68429 addons.go:510] duration metric: took 1.51344438s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 18:37:00.093896   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:00.094391   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:37:00.094453   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:37:00.094333   69670 retry.go:31] will retry after 2.855023319s: waiting for machine to come up
	I0815 18:37:04.297557   67936 start.go:364] duration metric: took 52.755115386s to acquireMachinesLock for "no-preload-599042"
	I0815 18:37:04.297614   67936 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:37:04.297639   67936 fix.go:54] fixHost starting: 
	I0815 18:37:04.298066   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:04.298096   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:04.317897   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
	I0815 18:37:04.318309   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:04.318797   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:04.318822   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:04.319191   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:04.319388   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:04.319543   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:04.320970   67936 fix.go:112] recreateIfNeeded on no-preload-599042: state=Stopped err=<nil>
	I0815 18:37:04.320994   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	W0815 18:37:04.321164   67936 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:37:04.322689   67936 out.go:177] * Restarting existing kvm2 VM for "no-preload-599042" ...
	I0815 18:37:00.654833   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:03.154235   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:02.950449   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950903   68713 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:37:02.950931   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950941   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:37:02.951319   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.951356   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | skip adding static IP to network mk-old-k8s-version-278865 - found existing host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"}
	I0815 18:37:02.951376   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:37:02.951393   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:37:02.951424   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:37:02.953498   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.953804   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953927   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:37:02.953957   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:37:02.953989   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:02.954001   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:37:02.954009   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:37:03.076431   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:03.076748   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:37:03.077325   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.079733   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080100   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.080132   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080332   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:37:03.080537   68713 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:03.080554   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:03.080717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.082778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083140   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.083168   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083331   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.083482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083612   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083730   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.083881   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.084067   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.084078   68713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:03.188779   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:03.188813   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189045   68713 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:37:03.189069   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189284   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.191858   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192171   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.192192   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192328   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.192533   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192676   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192822   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.193015   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.193180   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.193192   68713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:37:03.313099   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:37:03.313129   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.315840   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316196   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.316226   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316378   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.316608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316760   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316885   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.317001   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.317184   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.317207   68713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:03.429897   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:03.429934   68713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:03.429962   68713 buildroot.go:174] setting up certificates
	I0815 18:37:03.429972   68713 provision.go:84] configureAuth start
	I0815 18:37:03.429983   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.430274   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.432724   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433053   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.433083   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433212   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.435181   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435514   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.435543   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435657   68713 provision.go:143] copyHostCerts
	I0815 18:37:03.435715   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:03.435736   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:03.435804   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:03.435919   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:03.435929   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:03.435959   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:03.436045   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:03.436055   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:03.436082   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:03.436170   68713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
	I0815 18:37:03.604924   68713 provision.go:177] copyRemoteCerts
	I0815 18:37:03.604979   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:03.605003   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.607328   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607616   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.607634   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607821   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.608016   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.608171   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.608429   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:03.690560   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:03.714632   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:37:03.737805   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:03.762338   68713 provision.go:87] duration metric: took 332.353741ms to configureAuth
	I0815 18:37:03.762371   68713 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:03.762543   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:37:03.762608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.765626   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.765988   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.766018   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.766211   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.766380   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766574   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766712   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.766897   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.767053   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.767069   68713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:04.050635   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:04.050663   68713 machine.go:96] duration metric: took 970.113556ms to provisionDockerMachine
	I0815 18:37:04.050674   68713 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:37:04.050685   68713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:04.050717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.051048   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:04.051081   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.053709   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054095   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.054124   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054432   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.054622   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.054774   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.054914   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.139381   68713 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:04.145097   68713 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:04.145124   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:04.145201   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:04.145298   68713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:04.145421   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:04.156166   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:04.181562   68713 start.go:296] duration metric: took 130.872499ms for postStartSetup
	I0815 18:37:04.181605   68713 fix.go:56] duration metric: took 19.879821037s for fixHost
	I0815 18:37:04.181629   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.184268   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184652   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.184682   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184917   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.185151   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185345   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185502   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.185677   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:04.185925   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:04.185938   68713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:04.297391   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747024.271483326
	
	I0815 18:37:04.297413   68713 fix.go:216] guest clock: 1723747024.271483326
	I0815 18:37:04.297423   68713 fix.go:229] Guest: 2024-08-15 18:37:04.271483326 +0000 UTC Remote: 2024-08-15 18:37:04.181610291 +0000 UTC m=+251.426055371 (delta=89.873035ms)
	I0815 18:37:04.297448   68713 fix.go:200] guest clock delta is within tolerance: 89.873035ms
	I0815 18:37:04.297455   68713 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 19.99571173s
	I0815 18:37:04.297504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.297818   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:04.300970   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301425   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.301455   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301609   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302194   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302404   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302495   68713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:04.302545   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.302679   68713 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:04.302705   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.305673   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.305903   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306066   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306092   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306273   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306301   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306337   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306537   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306657   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306664   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306827   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306834   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.307009   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.409319   68713 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:04.415576   68713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:04.565772   68713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:04.571909   68713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:04.571996   68713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:04.588400   68713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:04.588427   68713 start.go:495] detecting cgroup driver to use...
	I0815 18:37:04.588528   68713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:04.604253   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:04.619003   68713 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:04.619051   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:04.632530   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:04.646080   68713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:04.763855   68713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:04.922470   68713 docker.go:233] disabling docker service ...
	I0815 18:37:04.922566   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:04.937301   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:04.950721   68713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:05.079767   68713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:05.210207   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:05.225569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:05.247998   68713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:37:05.248070   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.262851   68713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:05.262924   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.274489   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.285901   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.298749   68713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:05.310052   68713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:05.320992   68713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:05.321073   68713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:05.340323   68713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:05.354069   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:05.483573   68713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:05.647020   68713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:05.647094   68713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:05.653850   68713 start.go:563] Will wait 60s for crictl version
	I0815 18:37:05.653924   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:05.658476   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:05.697818   68713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:05.697907   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.724931   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.755831   68713 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:37:01.094934   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:03.594364   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:05.756950   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:05.759791   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760188   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:05.760220   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760468   68713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:05.764753   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:05.777462   68713 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:05.777614   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:37:05.777679   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:05.848895   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:05.848967   68713 ssh_runner.go:195] Run: which lz4
	I0815 18:37:05.853103   68713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:37:05.858012   68713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:37:05.858046   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:37:07.520567   68713 crio.go:462] duration metric: took 1.667489785s to copy over tarball
	I0815 18:37:07.520642   68713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:37:04.324093   67936 main.go:141] libmachine: (no-preload-599042) Calling .Start
	I0815 18:37:04.324263   67936 main.go:141] libmachine: (no-preload-599042) Ensuring networks are active...
	I0815 18:37:04.325099   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network default is active
	I0815 18:37:04.325778   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network mk-no-preload-599042 is active
	I0815 18:37:04.326007   67936 main.go:141] libmachine: (no-preload-599042) Getting domain xml...
	I0815 18:37:04.328184   67936 main.go:141] libmachine: (no-preload-599042) Creating domain...
	I0815 18:37:05.626206   67936 main.go:141] libmachine: (no-preload-599042) Waiting to get IP...
	I0815 18:37:05.627374   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.627877   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.627935   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.627844   69876 retry.go:31] will retry after 199.774188ms: waiting for machine to come up
	I0815 18:37:05.829673   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.830213   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.830240   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.830170   69876 retry.go:31] will retry after 255.850483ms: waiting for machine to come up
	I0815 18:37:06.087766   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.088378   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.088405   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.088330   69876 retry.go:31] will retry after 351.231421ms: waiting for machine to come up
	I0815 18:37:06.440937   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.441597   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.441626   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.441572   69876 retry.go:31] will retry after 602.620924ms: waiting for machine to come up
	I0815 18:37:07.046269   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.046745   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.046769   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.046712   69876 retry.go:31] will retry after 578.450642ms: waiting for machine to come up
	I0815 18:37:07.627330   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.627832   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.627859   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.627791   69876 retry.go:31] will retry after 731.331176ms: waiting for machine to come up
	I0815 18:37:08.361310   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:08.361746   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:08.361776   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:08.361706   69876 retry.go:31] will retry after 1.089237688s: waiting for machine to come up
	I0815 18:37:05.157378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:07.162990   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:09.654672   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:06.093822   68429 node_ready.go:49] node "default-k8s-diff-port-423062" has status "Ready":"True"
	I0815 18:37:06.093853   68429 node_ready.go:38] duration metric: took 7.003558244s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:37:06.093867   68429 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:06.103462   68429 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111214   68429 pod_ready.go:93] pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.111235   68429 pod_ready.go:82] duration metric: took 7.746382ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111244   68429 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117713   68429 pod_ready.go:93] pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.117739   68429 pod_ready.go:82] duration metric: took 6.487608ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117750   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:08.126216   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.128095   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.534169   68713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013498464s)
	I0815 18:37:10.534194   68713 crio.go:469] duration metric: took 3.013602868s to extract the tarball
	I0815 18:37:10.534201   68713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:37:10.578998   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:10.619043   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:10.619146   68713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:10.619246   68713 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.619247   68713 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.619278   68713 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:37:10.619275   68713 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.619291   68713 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.619304   68713 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.619322   68713 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.619405   68713 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621367   68713 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.621384   68713 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:37:10.621468   68713 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.621500   68713 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.621596   68713 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.621646   68713 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621706   68713 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.621897   68713 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.798617   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.828530   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:37:10.859528   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.918714   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.977028   68713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:37:10.977073   68713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.977119   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:10.980573   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.985503   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.990642   68713 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:37:10.990684   68713 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:37:10.990733   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.000388   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.007526   68713 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:37:11.007589   68713 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.007642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.008543   68713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:37:11.008581   68713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.008621   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.008642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077224   68713 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:37:11.077269   68713 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077228   68713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:37:11.077347   68713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.077371   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111299   68713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:37:11.111376   68713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.111387   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.111421   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111471   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.156942   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.156944   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.156997   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.263355   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.263448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.263455   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.263544   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.291407   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.312626   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.334606   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.427937   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.433739   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:11.435371   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.439448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.439541   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:37:11.450901   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.477906   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.520009   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:37:11.572349   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:37:11.686243   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:37:11.686295   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:37:11.686325   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:37:11.686378   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:37:11.686420   68713 cache_images.go:92] duration metric: took 1.067250234s to LoadCachedImages
	W0815 18:37:11.686494   68713 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0815 18:37:11.686508   68713 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:37:11.686620   68713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:11.686693   68713 ssh_runner.go:195] Run: crio config
	I0815 18:37:11.736781   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:37:11.736808   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:11.736824   68713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:11.736851   68713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:37:11.737039   68713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:11.737120   68713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:37:11.747511   68713 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:11.747585   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:11.757850   68713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:37:11.775982   68713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:11.792938   68713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:37:11.811576   68713 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:11.815708   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:11.829992   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:11.983884   68713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:12.002603   68713 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:37:12.002632   68713 certs.go:194] generating shared ca certs ...
	I0815 18:37:12.002682   68713 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.002867   68713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:12.002926   68713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:12.002942   68713 certs.go:256] generating profile certs ...
	I0815 18:37:12.025160   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:37:12.025296   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:37:12.025351   68713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:37:12.025516   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:12.025578   68713 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:12.025591   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:12.025627   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:12.025661   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:12.025691   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:12.025746   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:12.026614   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:12.066771   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:12.109649   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:12.176744   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:12.207990   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:37:12.244999   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:37:12.282338   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:12.308761   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:37:12.332316   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:12.355977   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:12.379169   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:12.405472   68713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:12.424110   68713 ssh_runner.go:195] Run: openssl version
	I0815 18:37:12.430231   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:12.441531   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.445971   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.446061   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.452134   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:12.466809   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:12.478211   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482659   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482708   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.490225   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:12.504908   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:12.516825   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521854   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521911   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.527884   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:12.539398   68713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:12.544010   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:12.549918   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:12.555714   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:12.561895   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:12.567736   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:12.573664   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:37:12.579510   68713 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:12.579627   68713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:12.579688   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.621503   68713 cri.go:89] found id: ""
	I0815 18:37:12.621576   68713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:12.632722   68713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:12.632746   68713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:12.632796   68713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:12.643192   68713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:12.644607   68713 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:12.645629   68713 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-278865" cluster setting kubeconfig missing "old-k8s-version-278865" context setting]
	I0815 18:37:12.647073   68713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.653052   68713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:12.665777   68713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.89
	I0815 18:37:12.665808   68713 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:12.665821   68713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:12.665872   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.713574   68713 cri.go:89] found id: ""
	I0815 18:37:12.713641   68713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:12.731459   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:12.741769   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:12.741789   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:12.741833   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:12.750990   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:12.751049   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:12.761621   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:12.771204   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:12.771261   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:12.782012   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:09.452971   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:09.453451   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:09.453494   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:09.453393   69876 retry.go:31] will retry after 1.35461204s: waiting for machine to come up
	I0815 18:37:10.809664   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:10.810127   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:10.810158   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:10.810065   69876 retry.go:31] will retry after 1.709820883s: waiting for machine to come up
	I0815 18:37:12.521458   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:12.521988   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:12.522016   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:12.521930   69876 retry.go:31] will retry after 1.401971708s: waiting for machine to come up
	I0815 18:37:13.925401   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:13.925868   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:13.925898   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:13.925824   69876 retry.go:31] will retry after 2.768002946s: waiting for machine to come up
	I0815 18:37:11.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:14.154561   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.400960   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:13.128357   68429 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.128379   68429 pod_ready.go:82] duration metric: took 7.010621879s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.128389   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136617   68429 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.136638   68429 pod_ready.go:82] duration metric: took 8.242471ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136648   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143530   68429 pod_ready.go:93] pod "kube-proxy-bnxv7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.143551   68429 pod_ready.go:82] duration metric: took 6.895931ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143563   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151691   68429 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.151721   68429 pod_ready.go:82] duration metric: took 8.149821ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151735   68429 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:15.158172   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.791928   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:12.791994   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:12.801858   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:12.811023   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:12.811083   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:12.822189   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:12.834293   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:12.974325   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.452192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.690442   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.798270   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.900783   68713 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:13.900877   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.401954   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.901809   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.401755   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.901010   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.401794   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.901149   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:17.401599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.694999   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:16.695488   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:16.695506   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:16.695430   69876 retry.go:31] will retry after 2.308386075s: waiting for machine to come up
	I0815 18:37:16.154692   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:18.653763   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.159197   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:19.159442   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.901511   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.401720   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.900976   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.401223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.901522   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.901573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:22.401279   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.005581   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:19.005979   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:19.006008   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:19.005930   69876 retry.go:31] will retry after 2.758801207s: waiting for machine to come up
	I0815 18:37:21.766860   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767286   67936 main.go:141] libmachine: (no-preload-599042) Found IP for machine: 192.168.72.14
	I0815 18:37:21.767303   67936 main.go:141] libmachine: (no-preload-599042) Reserving static IP address...
	I0815 18:37:21.767314   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has current primary IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767722   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.767745   67936 main.go:141] libmachine: (no-preload-599042) Reserved static IP address: 192.168.72.14
	I0815 18:37:21.767757   67936 main.go:141] libmachine: (no-preload-599042) DBG | skip adding static IP to network mk-no-preload-599042 - found existing host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"}
	I0815 18:37:21.767768   67936 main.go:141] libmachine: (no-preload-599042) DBG | Getting to WaitForSSH function...
	I0815 18:37:21.767780   67936 main.go:141] libmachine: (no-preload-599042) Waiting for SSH to be available...
	I0815 18:37:21.769674   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.769950   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.769973   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.770072   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH client type: external
	I0815 18:37:21.770103   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa (-rw-------)
	I0815 18:37:21.770134   67936 main.go:141] libmachine: (no-preload-599042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:21.770147   67936 main.go:141] libmachine: (no-preload-599042) DBG | About to run SSH command:
	I0815 18:37:21.770162   67936 main.go:141] libmachine: (no-preload-599042) DBG | exit 0
	I0815 18:37:21.888536   67936 main.go:141] libmachine: (no-preload-599042) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:21.888900   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetConfigRaw
	I0815 18:37:21.889541   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:21.892351   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892730   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.892760   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892976   67936 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/config.json ...
	I0815 18:37:21.893181   67936 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:21.893203   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:21.893404   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.895471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895774   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.895812   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895967   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.896153   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896334   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896522   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.896697   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.896872   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.896884   67936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:21.992598   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:21.992622   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.992856   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:37:21.992884   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.993095   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.995586   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.995902   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.995930   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.996051   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.996239   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996375   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996538   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.996691   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.996869   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.996884   67936 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-599042 && echo "no-preload-599042" | sudo tee /etc/hostname
	I0815 18:37:22.106513   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-599042
	
	I0815 18:37:22.106553   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.109655   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110111   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.110143   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110362   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.110548   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110718   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110838   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.110970   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.111141   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.111162   67936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-599042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-599042/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-599042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:22.221858   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:22.221898   67936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:22.221924   67936 buildroot.go:174] setting up certificates
	I0815 18:37:22.221938   67936 provision.go:84] configureAuth start
	I0815 18:37:22.221956   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:22.222278   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:22.225058   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225374   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.225410   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225544   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.227539   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.227885   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.227929   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.228052   67936 provision.go:143] copyHostCerts
	I0815 18:37:22.228111   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:22.228126   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:22.228190   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:22.228273   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:22.228282   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:22.228301   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:22.228352   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:22.228359   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:22.228375   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:22.228428   67936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.no-preload-599042 san=[127.0.0.1 192.168.72.14 localhost minikube no-preload-599042]
	I0815 18:37:22.383520   67936 provision.go:177] copyRemoteCerts
	I0815 18:37:22.383578   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:22.383601   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.386048   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386303   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.386338   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386566   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.386722   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.386894   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.387036   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.470828   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:22.494929   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:22.519545   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:37:22.544417   67936 provision.go:87] duration metric: took 322.465732ms to configureAuth
	I0815 18:37:22.544442   67936 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:22.544661   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:22.544736   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.547284   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547610   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.547641   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547876   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.548076   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548271   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548413   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.548594   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.548795   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.548818   67936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:22.803896   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:22.803924   67936 machine.go:96] duration metric: took 910.728961ms to provisionDockerMachine
	I0815 18:37:22.803935   67936 start.go:293] postStartSetup for "no-preload-599042" (driver="kvm2")
	I0815 18:37:22.803945   67936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:22.803959   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:22.804274   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:22.804322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.807041   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807437   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.807467   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807570   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.807747   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.807906   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.808002   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.887667   67936 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:22.892368   67936 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:22.892393   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:22.892480   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:22.892588   67936 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:22.892681   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:22.901987   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:22.927782   67936 start.go:296] duration metric: took 123.834401ms for postStartSetup
	I0815 18:37:22.927823   67936 fix.go:56] duration metric: took 18.630196933s for fixHost
	I0815 18:37:22.927848   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.930378   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930728   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.930755   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930868   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.931043   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931386   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.931538   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.931705   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.931718   67936 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:23.029393   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747042.997661196
	
	I0815 18:37:23.029423   67936 fix.go:216] guest clock: 1723747042.997661196
	I0815 18:37:23.029433   67936 fix.go:229] Guest: 2024-08-15 18:37:22.997661196 +0000 UTC Remote: 2024-08-15 18:37:22.927828036 +0000 UTC m=+353.975665928 (delta=69.83316ms)
	I0815 18:37:23.029455   67936 fix.go:200] guest clock delta is within tolerance: 69.83316ms
	I0815 18:37:23.029465   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 18.731874864s
	I0815 18:37:23.029491   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.029730   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:23.031885   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032242   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.032261   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032449   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.032908   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033062   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033149   67936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:23.033197   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.033303   67936 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:23.033322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.035943   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.035987   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036327   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036433   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036463   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036482   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036657   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036836   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036855   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.036966   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.037039   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037119   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037183   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.037242   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.117399   67936 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:23.138614   67936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:23.287862   67936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:23.293943   67936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:23.294013   67936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:23.310957   67936 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:23.310987   67936 start.go:495] detecting cgroup driver to use...
	I0815 18:37:23.311067   67936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:23.326641   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:23.340650   67936 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:23.340708   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:23.355401   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:23.369033   67936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:23.480891   67936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:23.629690   67936 docker.go:233] disabling docker service ...
	I0815 18:37:23.629782   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:23.644372   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:23.658312   67936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:23.779999   67936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:23.902630   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:23.917453   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:23.935696   67936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:37:23.935749   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.946031   67936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:23.946106   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.956639   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.967148   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.978049   67936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:23.989000   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.999290   67936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.017002   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.027432   67936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:24.036714   67936 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:24.036770   67936 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:24.048956   67936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:24.058269   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:24.173548   67936 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:24.316383   67936 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:24.316462   67936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:24.321726   67936 start.go:563] Will wait 60s for crictl version
	I0815 18:37:24.321803   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.325718   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:24.362995   67936 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:24.363099   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.392678   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.424128   67936 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
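Note: the sequence of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10, set cgroup_manager to "cgroupfs" with conmon_cgroup = "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before crio is restarted. A minimal, hypothetical way to confirm the result on the guest (these checks are not part of the log above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	sudo crictl version    # expect RuntimeName cri-o, RuntimeVersion 1.29.1, matching the output logged above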
	I0815 18:37:20.654186   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:23.154893   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:21.658499   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:24.159865   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:22.901608   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.401519   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.901287   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.401831   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.901547   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.401220   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.901109   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.401441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.901515   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:27.401258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.425451   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:24.428263   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428631   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:24.428656   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428833   67936 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:24.433343   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:24.446011   67936 kubeadm.go:883] updating cluster {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:24.446123   67936 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:37:24.446168   67936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:24.484321   67936 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:37:24.484346   67936 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:24.484417   67936 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.484429   67936 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.484444   67936 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.484470   67936 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.484472   67936 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.484581   67936 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.484583   67936 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 18:37:24.484585   67936 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485844   67936 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 18:37:24.485852   67936 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.485837   67936 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.485906   67936 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.646217   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.653405   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.658441   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.662835   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.662858   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.681979   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.715361   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 18:37:24.722352   67936 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 18:37:24.722391   67936 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.722450   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.787439   67936 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 18:37:24.787486   67936 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.787530   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810570   67936 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 18:37:24.810606   67936 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 18:37:24.810612   67936 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.810630   67936 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.810666   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810667   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841566   67936 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 18:37:24.841617   67936 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.841669   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841698   67936 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 18:37:24.841743   67936 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.841800   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.950875   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.950918   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.950933   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.950989   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.951004   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.951052   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.079551   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.079601   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.079634   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.084852   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.084874   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.084910   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.216095   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.216235   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.216308   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.216384   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.216400   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.216431   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.336055   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 18:37:25.336126   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 18:37:25.336180   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.336222   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:25.336181   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 18:37:25.336320   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:25.352527   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 18:37:25.352566   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 18:37:25.352592   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 18:37:25.352639   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:25.352650   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:25.352702   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:25.355747   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 18:37:25.355764   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355769   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 18:37:25.355797   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355806   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 18:37:25.363222   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 18:37:25.363257   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 18:37:25.363435   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 18:37:25.476601   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142118   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.786287506s)
	I0815 18:37:28.142134   67936 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.665496935s)
	I0815 18:37:28.142146   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 18:37:28.142177   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142190   67936 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 18:37:28.142220   67936 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142244   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142259   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:25.155516   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.156071   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:29.157389   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:26.658491   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:28.659080   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.901777   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.401103   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.901746   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.401521   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.901691   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.401326   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.901672   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.401534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.901013   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:32.401696   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.598348   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.456076001s)
	I0815 18:37:29.598380   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 18:37:29.598404   67936 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598407   67936 ssh_runner.go:235] Completed: which crictl: (1.456124508s)
	I0815 18:37:29.598451   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598474   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.495864   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.897383444s)
	I0815 18:37:31.495897   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.897403663s)
	I0815 18:37:31.495902   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 18:37:31.495931   67936 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.657799   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:34.156377   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:31.158308   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:33.159177   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:35.668218   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:32.901441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.901095   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.401705   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.901020   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.401019   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.901094   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.400952   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.901717   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:37.401701   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.526372   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.030374686s)
	I0815 18:37:35.526410   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 18:37:35.526422   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.030343547s)
	I0815 18:37:35.526438   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.526482   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:35.526483   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.570806   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 18:37:35.570906   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:37.500059   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.973499408s)
	I0815 18:37:37.500098   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 18:37:37.500120   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:37.500072   67936 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.929150036s)
	I0815 18:37:37.500208   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 18:37:37.500161   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:36.157239   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.656856   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.158685   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:40.158728   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:37.901353   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.401426   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.901599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.401173   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.901593   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.401758   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.401698   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:42.401409   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.563532   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.063281797s)
	I0815 18:37:39.563562   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 18:37:39.563595   67936 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:39.563642   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:40.208180   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 18:37:40.208232   67936 cache_images.go:123] Successfully loaded all cached images
	I0815 18:37:40.208240   67936 cache_images.go:92] duration metric: took 15.723882738s to LoadCachedImages
	I0815 18:37:40.208252   67936 kubeadm.go:934] updating node { 192.168.72.14 8443 v1.31.0 crio true true} ...
	I0815 18:37:40.208416   67936 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-599042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:40.208544   67936 ssh_runner.go:195] Run: crio config
	I0815 18:37:40.261526   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:40.261545   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:40.261552   67936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:40.261572   67936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.14 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-599042 NodeName:no-preload-599042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:37:40.261688   67936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-599042"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:40.261742   67936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:37:40.271844   67936 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:40.271921   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:40.280957   67936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0815 18:37:40.297378   67936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:40.313215   67936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
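Note: the kubeadm configuration dumped above is the 2158-byte payload just written to /var/tmp/minikube/kubeadm.yaml.new. As a hypothetical sketch only (the actual kubeadm invocation is not shown in this excerpt, and it is assumed here that a kubeadm binary sits alongside kubelet under /var/lib/minikube/binaries/v1.31.0), such a file would be consumed along the lines of:

	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new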
	I0815 18:37:40.329640   67936 ssh_runner.go:195] Run: grep 192.168.72.14	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:40.333331   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:40.344805   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:40.457352   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
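Note: after the unit files and kubeadm payload are in place, systemd is reloaded and kubelet is started. A quick, hypothetical way to verify that step on the guest (not captured in this log):

	sudo systemctl is-active kubelet
	sudo journalctl -u kubelet --no-pager | tail -n 20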
	I0815 18:37:40.475219   67936 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042 for IP: 192.168.72.14
	I0815 18:37:40.475238   67936 certs.go:194] generating shared ca certs ...
	I0815 18:37:40.475252   67936 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:40.475416   67936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:40.475475   67936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:40.475489   67936 certs.go:256] generating profile certs ...
	I0815 18:37:40.475591   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.key
	I0815 18:37:40.475670   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key.15ba6898
	I0815 18:37:40.475714   67936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key
	I0815 18:37:40.475865   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:40.475904   67936 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:40.475917   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:40.475950   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:40.475978   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:40.476012   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:40.476069   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:40.476738   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:40.513554   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:40.549095   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:40.578010   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:40.612637   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:37:40.639974   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:37:40.672937   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:40.696889   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:37:40.721258   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:40.744104   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:40.766463   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:40.788628   67936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:40.805346   67936 ssh_runner.go:195] Run: openssl version
	I0815 18:37:40.811193   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:40.822610   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826918   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826969   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.832544   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:40.843338   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:40.854032   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858512   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858563   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.864247   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:40.874724   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:40.885538   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889849   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889899   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.895258   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:40.906841   67936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:40.911629   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:40.918085   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:40.924194   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:40.930009   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:40.935634   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:40.941168   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
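The six openssl runs above all use -checkend 86400, i.e. they fail if a certificate expires within the next 24 hours. The same condition can be checked natively; below is a minimal sketch with Go's crypto/x509, assuming the cert path from the log is readable locally (in the test it is read on the node over SSH):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// before now+window, the condition that makes `openssl x509 -checkend 86400`
// exit non-zero.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}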
	I0815 18:37:40.946761   67936 kubeadm.go:392] StartCluster: {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:40.946836   67936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:40.946874   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:40.990733   67936 cri.go:89] found id: ""
	I0815 18:37:40.990808   67936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:41.002969   67936 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:41.002988   67936 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:41.003041   67936 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:41.013722   67936 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:41.015079   67936 kubeconfig.go:125] found "no-preload-599042" server: "https://192.168.72.14:8443"
	I0815 18:37:41.017905   67936 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:41.029240   67936 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.14
	I0815 18:37:41.029271   67936 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:41.029284   67936 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:41.029326   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:41.064689   67936 cri.go:89] found id: ""
	I0815 18:37:41.064754   67936 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:41.085195   67936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:41.096355   67936 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:41.096375   67936 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:41.096425   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:41.106887   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:41.106941   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:41.117599   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:41.127956   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:41.128020   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:41.137384   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.146075   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:41.146122   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.156417   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:41.165287   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:41.165325   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
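The four grep/rm pairs above follow one pattern: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443, it is removed so the kubeadm init phases that follow can regenerate it. A condensed sketch of that pattern, written as a local loop rather than the per-file SSH commands the test runs:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// pruneStaleKubeconfigs removes any of the given kubeconfig files that do not
// reference the expected control-plane endpoint, mirroring the grep-then-rm
// sequence in the log above (locally, not over SSH).
func pruneStaleKubeconfigs(paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale or missing config %s\n", p)
			os.Remove(p) // ignore error, matching `rm -f`
		}
	}
}

func main() {
	pruneStaleKubeconfigs([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}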
	I0815 18:37:41.174245   67936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:41.183335   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:41.314804   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.422591   67936 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.107749325s)
	I0815 18:37:42.422628   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.642065   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.710265   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.791233   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:42.791334   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.291538   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.791682   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.831611   67936 api_server.go:72] duration metric: took 1.040390925s to wait for apiserver process to appear ...
	I0815 18:37:43.831641   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:37:43.831662   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:43.832110   67936 api_server.go:269] stopped: https://192.168.72.14:8443/healthz: Get "https://192.168.72.14:8443/healthz": dial tcp 192.168.72.14:8443: connect: connection refused
	I0815 18:37:41.154701   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:43.655756   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.661385   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:45.158918   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.901106   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.401146   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.901869   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.401483   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.901302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.401505   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.901504   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.401025   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.901713   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:47.401588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.332554   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.112640   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:37:47.112668   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:37:47.112681   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.244211   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.244246   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.332375   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.339129   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.339153   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.831731   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.836308   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.836330   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.331914   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.336314   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:48.336347   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.831862   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.836012   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:37:48.842971   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:37:48.842996   67936 api_server.go:131] duration metric: took 5.011346791s to wait for apiserver health ...
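The healthz polling above moves through the usual restart sequence: connection refused while the apiserver process comes up, 403 because the anonymous probe hits the endpoint before the RBAC bootstrap roles exist, 500 while post-start hooks are still failing, and finally 200. A minimal sketch of such a wait loop, using the endpoint from the log and skipping TLS verification only for brevity (the real client authenticates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes. Connection errors and non-200 statuses (403 before
// RBAC bootstrap, 500 while post-start hooks run) are retried, matching the
// progression seen in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.14:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}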
	I0815 18:37:48.843008   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:48.843015   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:48.844939   67936 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:37:48.846262   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:37:48.857335   67936 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
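The 496-byte file copied above is the bridge CNI conflist written by the "Configuring bridge CNI" step for the kvm2 + crio combination. Its exact contents are not shown in the log; the sketch below writes an illustrative bridge + portmap conflist of the same general shape, reusing the 10.244.0.0/16 pod subnet from the kubeadm config, and should not be taken as the file minikube actually generates:

package main

import (
	"fmt"
	"os"
)

// An illustrative bridge + portmap conflist in the spirit of what the step
// above copies to /etc/cni/net.d/1-k8s.conflist. The real file's contents
// are not reproduced in this log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}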
	I0815 18:37:48.876370   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:37:48.886582   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:37:48.886628   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:37:48.886640   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:37:48.886653   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:37:48.886666   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:37:48.886679   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 18:37:48.886691   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:37:48.886701   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:37:48.886711   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 18:37:48.886722   67936 system_pods.go:74] duration metric: took 10.329234ms to wait for pod list to return data ...
	I0815 18:37:48.886736   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:37:48.890525   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:37:48.890560   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:37:48.890571   67936 node_conditions.go:105] duration metric: took 3.828616ms to run NodePressure ...
	I0815 18:37:48.890590   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:46.155548   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:48.655549   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:49.183845   67936 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188602   67936 kubeadm.go:739] kubelet initialised
	I0815 18:37:49.188629   67936 kubeadm.go:740] duration metric: took 4.755371ms waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188639   67936 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:49.193101   67936 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.199195   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199215   67936 pod_ready.go:82] duration metric: took 6.088761ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.199226   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199236   67936 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.205076   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205095   67936 pod_ready.go:82] duration metric: took 5.848521ms for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.205105   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205111   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.210559   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210578   67936 pod_ready.go:82] duration metric: took 5.449861ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.210587   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210594   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.281799   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281828   67936 pod_ready.go:82] duration metric: took 71.206144ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.281840   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281850   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.680097   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680121   67936 pod_ready.go:82] duration metric: took 398.261641ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.680131   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680136   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.080391   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080415   67936 pod_ready.go:82] duration metric: took 400.272871ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.080425   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080430   67936 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.482715   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482744   67936 pod_ready.go:82] duration metric: took 402.304556ms for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.482753   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482761   67936 pod_ready.go:39] duration metric: took 1.294109816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
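Every pod in the wait loop above is skipped for the same reason: the node itself still reports Ready:"False", so the per-pod check cannot succeed yet. Once the node is Ready, the check reduces to reading the pod's Ready condition; a small client-go sketch of that read follows (the kubeconfig path is a placeholder, the namespace and pod name are taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady returns true when the pod's Ready condition is True, the
// condition the pod_ready helpers above wait for once the node is Ready.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-kpq9m", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", podIsReady(pod))
}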
	I0815 18:37:50.482779   67936 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:37:50.495888   67936 ops.go:34] apiserver oom_adj: -16
	I0815 18:37:50.495912   67936 kubeadm.go:597] duration metric: took 9.4929178s to restartPrimaryControlPlane
	I0815 18:37:50.495924   67936 kubeadm.go:394] duration metric: took 9.549167115s to StartCluster
	I0815 18:37:50.495943   67936 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.496020   67936 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:50.497743   67936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.497976   67936 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:37:50.498166   67936 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:37:50.498225   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:50.498251   67936 addons.go:69] Setting storage-provisioner=true in profile "no-preload-599042"
	I0815 18:37:50.498266   67936 addons.go:69] Setting default-storageclass=true in profile "no-preload-599042"
	I0815 18:37:50.498287   67936 addons.go:234] Setting addon storage-provisioner=true in "no-preload-599042"
	I0815 18:37:50.498303   67936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-599042"
	W0815 18:37:50.498311   67936 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:37:50.498343   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.498708   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498733   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498745   67936 addons.go:69] Setting metrics-server=true in profile "no-preload-599042"
	I0815 18:37:50.498753   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.498783   67936 addons.go:234] Setting addon metrics-server=true in "no-preload-599042"
	W0815 18:37:50.498795   67936 addons.go:243] addon metrics-server should already be in state true
	I0815 18:37:50.498734   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.499070   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.499350   67936 out.go:177] * Verifying Kubernetes components...
	I0815 18:37:50.499436   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.499467   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.500629   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:50.514727   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0815 18:37:50.514956   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0815 18:37:50.515112   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515379   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515622   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515639   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.515844   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515866   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.516028   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.516697   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.516741   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.516854   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.517455   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.517487   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.517879   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0815 18:37:50.518180   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.518645   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.518666   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.518975   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.519155   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.522283   67936 addons.go:234] Setting addon default-storageclass=true in "no-preload-599042"
	W0815 18:37:50.522301   67936 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:37:50.522321   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.522589   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.522616   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.533306   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0815 18:37:50.533891   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.534378   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.534403   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.535077   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.535251   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.536333   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0815 18:37:50.536960   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.537421   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.537484   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.537500   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.537581   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0815 18:37:50.537832   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.537992   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.538044   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.538964   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.538983   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.539442   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.539494   67936 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:37:50.540127   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.540138   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.540166   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.540633   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:37:50.540653   67936 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:37:50.540673   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.541641   67936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:47.658449   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.159642   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.542848   67936 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.542867   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:37:50.542883   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.544059   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544644   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.544669   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544879   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.545056   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.545226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.545363   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.545609   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.545957   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.545984   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.546188   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.546350   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.546459   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.546563   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.576049   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0815 18:37:50.576398   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.576963   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.576991   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.577315   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.577536   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.579041   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.579244   67936 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.579259   67936 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:37:50.579273   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.583471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583857   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.583884   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583984   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.584140   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.584298   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.584431   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.711232   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:50.738297   67936 node_ready.go:35] waiting up to 6m0s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:50.787041   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.876459   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.926707   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:37:50.926727   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:37:50.967734   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:37:50.967764   67936 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:37:50.994557   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:50.994580   67936 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:37:51.018573   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:51.217167   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217199   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217511   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217561   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217570   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.217579   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217592   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217846   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217889   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217900   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.223755   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.223774   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.224006   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.224024   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.794888   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.794919   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795198   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.795227   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795240   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.795256   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.795267   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795503   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795521   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936158   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936178   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936438   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.936467   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936505   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936519   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936528   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936754   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936773   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936785   67936 addons.go:475] Verifying addon metrics-server=true in "no-preload-599042"
	I0815 18:37:51.938619   67936 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 18:37:47.901026   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.401023   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.901661   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.401358   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.901410   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.401040   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.901695   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.401365   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.901733   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:52.401439   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.939743   67936 addons.go:510] duration metric: took 1.441583595s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
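The addon lines above follow one pattern: copy each manifest to /etc/kubernetes/addons/ on the guest, then apply them all with the bundled kubectl under the cluster's kubeconfig (the `sudo KUBECONFIG=... kubectl apply -f ...` runs at 18:37:50-51). A minimal sketch of that apply step, assuming the manifests are already on disk; the helper name and local-exec form are illustrative (the real command runs over SSH inside the VM), not minikube's internal API:

```go
package main

import (
	"fmt"
	"os/exec"
)

// applyAddonManifests mirrors the apply step seen in the log: it invokes the
// bundled kubectl with the cluster's kubeconfig against manifest files that
// were previously copied to /etc/kubernetes/addons/.
// Illustrative sketch only, not minikube's implementation.
func applyAddonManifests(kubectlPath, kubeconfig string, manifests []string) error {
	args := []string{"KUBECONFIG=" + kubeconfig, kubectlPath, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	// sudo accepts VAR=value assignments before the command, matching the
	// "sudo KUBECONFIG=... kubectl apply ..." form in the log.
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
	}
	return nil
}
```

For the metrics-server addon the manifest list corresponds to the four files applied together above: metrics-apiservice.yaml, metrics-server-deployment.yaml, metrics-server-rbac.yaml and metrics-server-service.yaml.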
	I0815 18:37:52.742152   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:51.155350   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:53.654487   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.658151   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:54.658269   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.901361   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.401417   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.901380   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.401820   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.901113   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.401270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.900941   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.901834   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:57.401496   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.242506   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:57.742723   67936 node_ready.go:49] node "no-preload-599042" has status "Ready":"True"
	I0815 18:37:57.742746   67936 node_ready.go:38] duration metric: took 7.00442012s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:57.742764   67936 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
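The node_ready/pod_ready lines are a polling loop: query the object's Ready condition, sleep, and give up only when the timeout (6m0s here) elapses, recording the elapsed time as a duration metric. A minimal sketch of that pattern, assuming kubectl is on PATH; the function name, poll interval, and jsonpath query are assumptions for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitNodeReady polls the node's Ready condition until it reports "True" or
// the timeout elapses, mirroring the "waiting up to 6m0s for node ... to be
// Ready" behaviour in the log. Illustrative sketch only.
func waitNodeReady(node string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node, "-o", jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %s", node, timeout)
}
```

The per-pod waits for coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy and kube-scheduler that follow use the same loop shape against each pod's Ready condition.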
	I0815 18:37:57.747927   67936 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752478   67936 pod_ready.go:93] pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:57.752513   67936 pod_ready.go:82] duration metric: took 4.560553ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752524   67936 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760896   67936 pod_ready.go:93] pod "etcd-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.760924   67936 pod_ready.go:82] duration metric: took 1.008390436s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760937   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774529   67936 pod_ready.go:93] pod "kube-apiserver-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.774557   67936 pod_ready.go:82] duration metric: took 13.611063ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774568   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793851   67936 pod_ready.go:93] pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.793873   67936 pod_ready.go:82] duration metric: took 19.297089ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793885   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943096   67936 pod_ready.go:93] pod "kube-proxy-bwb9h" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.943120   67936 pod_ready.go:82] duration metric: took 149.227014ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943129   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:56.154874   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:58.655280   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.158586   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:59.159257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.901938   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.401246   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.900950   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.400984   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.401707   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.901455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.901613   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:02.401302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.342426   67936 pod_ready.go:93] pod "kube-scheduler-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:59.342447   67936 pod_ready.go:82] duration metric: took 399.312035ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:59.342460   67936 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:38:01.349419   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.848558   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.154194   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.154779   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.658502   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:04.158895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:02.901914   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.401357   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.901258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.400961   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.401852   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.901115   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.401170   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.901694   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:07.401816   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
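The long run of `sudo pgrep -xnf kube-apiserver.*minikube.*` lines from process 68713 is a wait loop executed roughly every 500ms until an apiserver process appears on the node. A minimal sketch of that loop; the pgrep invocation is copied from the log, while the helper name, interval, and local-exec form are assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process whose
// command line matches the minikube pattern shows up, or the timeout expires.
// Illustrative sketch; in the report the command runs over SSH in the guest VM.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits non-zero when nothing matches, so a nil error means
		// the apiserver process was found.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}
```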
	I0815 18:38:05.849586   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.349057   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:05.155847   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.653607   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:09.654245   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:06.658092   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.659361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.900966   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.401136   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.901534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.400982   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.901126   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.401120   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.901175   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.401704   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.901710   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:12.401712   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.349443   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.349942   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.655212   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:14.154508   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.158562   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:13.657985   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:15.658088   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.901680   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.401532   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.901198   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:13.901295   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:13.938743   68713 cri.go:89] found id: ""
	I0815 18:38:13.938770   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.938778   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:13.938786   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:13.938843   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:13.971997   68713 cri.go:89] found id: ""
	I0815 18:38:13.972029   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.972041   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:13.972048   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:13.972111   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:14.006793   68713 cri.go:89] found id: ""
	I0815 18:38:14.006825   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.006836   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:14.006844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:14.006903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:14.041546   68713 cri.go:89] found id: ""
	I0815 18:38:14.041575   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.041587   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:14.041595   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:14.041680   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:14.077614   68713 cri.go:89] found id: ""
	I0815 18:38:14.077639   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.077648   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:14.077653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:14.077704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:14.113683   68713 cri.go:89] found id: ""
	I0815 18:38:14.113711   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.113721   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:14.113730   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:14.113790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:14.149581   68713 cri.go:89] found id: ""
	I0815 18:38:14.149608   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.149616   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:14.149622   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:14.149678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:14.191576   68713 cri.go:89] found id: ""
	I0815 18:38:14.191606   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.191614   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
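Each `found id: ""` / `No container was found matching ...` pair comes from listing containers by name with crictl and treating empty output as absence of that component. A sketch of that check, assuming crictl is available on the node; the helper name and the direct local invocation are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs crictl reports for a name
// filter; an empty slice corresponds to the "No container was found
// matching ..." warnings in the log. Illustrative sketch only.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %v", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"} {
		ids, err := listContainerIDs(name)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}
```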
	I0815 18:38:14.191622   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:14.191635   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:14.243253   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:14.243287   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:14.256818   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:14.256845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:14.382914   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.382933   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:14.382948   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:14.461826   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:14.461859   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
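When no control-plane containers are found, the run falls back to gathering diagnostics: recent kubelet and CRI-O journal entries, a filtered dmesg, container status, and a describe-nodes attempt that fails with "connection refused" on localhost:8443 while the apiserver is down. A sketch of that gathering step, with the shell commands taken from the log; the function shape, return type, and local execution are assumptions for illustration:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherDiagnostics runs the same commands the log-gathering step uses when
// the apiserver is unreachable. The describe-nodes command is expected to
// fail with "The connection to the server localhost:8443 was refused" until
// the control plane comes back. Illustrative sketch only.
func gatherDiagnostics(kubectlPath, kubeconfig string) map[string]string {
	cmds := map[string]string{
		"kubelet":        "sudo journalctl -u kubelet -n 400",
		"CRI-O":          "sudo journalctl -u crio -n 400",
		"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes": fmt.Sprintf("sudo %s describe nodes --kubeconfig=%s", kubectlPath, kubeconfig),
	}
	results := make(map[string]string)
	for name, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		if err != nil {
			results[name] = fmt.Sprintf("error: %v\n%s", err, out)
			continue
		}
		results[name] = string(out)
	}
	return results
}
```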
	I0815 18:38:17.005615   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:17.020977   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:17.021042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:17.070191   68713 cri.go:89] found id: ""
	I0815 18:38:17.070220   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.070232   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:17.070239   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:17.070301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:17.118582   68713 cri.go:89] found id: ""
	I0815 18:38:17.118612   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.118624   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:17.118631   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:17.118693   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:17.165380   68713 cri.go:89] found id: ""
	I0815 18:38:17.165404   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.165413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:17.165421   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:17.165483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:17.204630   68713 cri.go:89] found id: ""
	I0815 18:38:17.204660   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.204670   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:17.204678   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:17.204740   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:17.239182   68713 cri.go:89] found id: ""
	I0815 18:38:17.239210   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.239219   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:17.239226   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:17.239285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:17.276329   68713 cri.go:89] found id: ""
	I0815 18:38:17.276356   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.276367   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:17.276375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:17.276472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:17.312387   68713 cri.go:89] found id: ""
	I0815 18:38:17.312418   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.312427   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:17.312433   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:17.312485   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:17.348277   68713 cri.go:89] found id: ""
	I0815 18:38:17.348300   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.348308   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:17.348315   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:17.348334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:17.424886   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:17.424924   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.465491   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:17.465518   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:17.517687   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:17.517719   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:17.531928   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:17.531959   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:17.606987   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.849001   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:17.349912   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:16.155496   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.653621   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.159850   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.658717   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.107740   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:20.123194   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:20.123255   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:20.163586   68713 cri.go:89] found id: ""
	I0815 18:38:20.163608   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.163619   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:20.163627   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:20.163676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:20.200171   68713 cri.go:89] found id: ""
	I0815 18:38:20.200196   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.200204   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:20.200210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:20.200270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:20.234739   68713 cri.go:89] found id: ""
	I0815 18:38:20.234770   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.234781   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:20.234788   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:20.234849   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:20.270182   68713 cri.go:89] found id: ""
	I0815 18:38:20.270206   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.270215   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:20.270220   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:20.270281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:20.303643   68713 cri.go:89] found id: ""
	I0815 18:38:20.303672   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.303682   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:20.303690   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:20.303748   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:20.339399   68713 cri.go:89] found id: ""
	I0815 18:38:20.339431   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.339441   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:20.339449   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:20.339511   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:20.377220   68713 cri.go:89] found id: ""
	I0815 18:38:20.377245   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.377252   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:20.377258   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:20.377310   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:20.411202   68713 cri.go:89] found id: ""
	I0815 18:38:20.411238   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.411249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:20.411268   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:20.411282   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:20.462846   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:20.462879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:20.476569   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:20.476597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:20.554243   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:20.554269   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:20.554285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:20.637450   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:20.637493   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:19.849194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:21.849502   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.655378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.154633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.160747   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.658706   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.182633   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:23.196953   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:23.197026   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:23.232011   68713 cri.go:89] found id: ""
	I0815 18:38:23.232039   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.232051   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:23.232064   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:23.232114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:23.266963   68713 cri.go:89] found id: ""
	I0815 18:38:23.266992   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.267000   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:23.267006   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:23.267069   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:23.306473   68713 cri.go:89] found id: ""
	I0815 18:38:23.306496   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.306504   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:23.306510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:23.306574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:23.343542   68713 cri.go:89] found id: ""
	I0815 18:38:23.343574   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.343585   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:23.343592   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:23.343652   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:23.382468   68713 cri.go:89] found id: ""
	I0815 18:38:23.382527   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.382539   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:23.382547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:23.382612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:23.418857   68713 cri.go:89] found id: ""
	I0815 18:38:23.418882   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.418891   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:23.418897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:23.418956   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:23.460971   68713 cri.go:89] found id: ""
	I0815 18:38:23.461004   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.461016   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:23.461023   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:23.461100   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:23.494139   68713 cri.go:89] found id: ""
	I0815 18:38:23.494172   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.494183   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:23.494194   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:23.494208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:23.547874   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:23.547908   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:23.562251   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:23.562278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:23.636503   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:23.636528   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:23.636545   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:23.716020   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:23.716051   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.255081   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:26.270118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:26.270184   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:26.308586   68713 cri.go:89] found id: ""
	I0815 18:38:26.308612   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.308623   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:26.308630   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:26.308688   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:26.344364   68713 cri.go:89] found id: ""
	I0815 18:38:26.344394   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.344410   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:26.344418   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:26.344533   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:26.381621   68713 cri.go:89] found id: ""
	I0815 18:38:26.381642   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.381650   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:26.381655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:26.381699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:26.416091   68713 cri.go:89] found id: ""
	I0815 18:38:26.416118   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.416128   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:26.416136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:26.416195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:26.456038   68713 cri.go:89] found id: ""
	I0815 18:38:26.456068   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.456080   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:26.456088   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:26.456151   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:26.490728   68713 cri.go:89] found id: ""
	I0815 18:38:26.490758   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.490769   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:26.490776   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:26.490837   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:26.529388   68713 cri.go:89] found id: ""
	I0815 18:38:26.529422   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.529434   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:26.529440   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:26.529489   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:26.567452   68713 cri.go:89] found id: ""
	I0815 18:38:26.567475   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.567484   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:26.567491   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:26.567503   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:26.641841   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:26.641863   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:26.641879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:26.719403   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:26.719438   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.760460   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:26.760507   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:26.814450   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:26.814480   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:24.349319   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:26.850207   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.155213   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.654265   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.656816   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.663849   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:30.158417   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.329451   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:29.344634   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:29.344706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:29.379278   68713 cri.go:89] found id: ""
	I0815 18:38:29.379308   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.379319   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:29.379326   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:29.379385   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:29.411854   68713 cri.go:89] found id: ""
	I0815 18:38:29.411881   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.411891   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:29.411898   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:29.411965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:29.443975   68713 cri.go:89] found id: ""
	I0815 18:38:29.444004   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.444014   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:29.444022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:29.444081   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:29.477919   68713 cri.go:89] found id: ""
	I0815 18:38:29.477944   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.477954   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:29.477962   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:29.478020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:29.518944   68713 cri.go:89] found id: ""
	I0815 18:38:29.518973   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.518985   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:29.518992   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:29.519052   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:29.553876   68713 cri.go:89] found id: ""
	I0815 18:38:29.553903   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.553913   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:29.553921   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:29.553974   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:29.590768   68713 cri.go:89] found id: ""
	I0815 18:38:29.590804   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.590815   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:29.590823   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:29.590879   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:29.625553   68713 cri.go:89] found id: ""
	I0815 18:38:29.625578   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.625586   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:29.625595   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:29.625606   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.668447   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:29.668478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:29.721002   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:29.721035   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:29.734955   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:29.734983   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:29.808703   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:29.808726   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:29.808742   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.397781   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:32.413876   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:32.413937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:32.453689   68713 cri.go:89] found id: ""
	I0815 18:38:32.453720   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.453777   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:32.453791   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:32.453839   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:32.490529   68713 cri.go:89] found id: ""
	I0815 18:38:32.490559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.490567   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:32.490573   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:32.490622   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:32.527680   68713 cri.go:89] found id: ""
	I0815 18:38:32.527710   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.527720   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:32.527727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:32.527790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:32.564619   68713 cri.go:89] found id: ""
	I0815 18:38:32.564656   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.564667   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:32.564677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:32.564745   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:32.600530   68713 cri.go:89] found id: ""
	I0815 18:38:32.600559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.600570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:32.600577   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:32.600639   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:32.636779   68713 cri.go:89] found id: ""
	I0815 18:38:32.636813   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.636821   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:32.636828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:32.636897   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:32.673743   68713 cri.go:89] found id: ""
	I0815 18:38:32.673774   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.673786   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:32.673794   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:32.673853   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:32.709678   68713 cri.go:89] found id: ""
	I0815 18:38:32.709708   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.709719   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:32.709730   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:32.709744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.785961   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:32.785998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.349763   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:31.350398   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:33.848873   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.155992   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.159855   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.657783   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
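	The interleaved pod_ready lines come from three other test processes (PIDs 67936, 68248 and 68429) polling their own clusters in parallel; each is waiting for a metrics-server pod in kube-system that keeps reporting Ready as False. A minimal, illustrative equivalent of that readiness check with kubectl (the pod name is copied from the lines above, <context> is a placeholder):

	    # Illustrative only; the context name is a placeholder, the pod name is from the log above.
	    kubectl --context <context> -n kube-system get pod metrics-server-6867b74b74-djv7r \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # or block until it becomes Ready (fails after the timeout if it never does):
	    kubectl --context <context> -n kube-system wait pod/metrics-server-6867b74b74-djv7r \
	      --for=condition=Ready --timeout=5m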
	I0815 18:38:32.828205   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:32.828237   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:32.894624   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:32.894666   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:32.910742   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:32.910769   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:32.980853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.481438   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:35.495373   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:35.495444   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:35.529184   68713 cri.go:89] found id: ""
	I0815 18:38:35.529212   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.529221   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:35.529226   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:35.529275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:35.565188   68713 cri.go:89] found id: ""
	I0815 18:38:35.565214   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.565221   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:35.565227   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:35.565281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:35.600386   68713 cri.go:89] found id: ""
	I0815 18:38:35.600416   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.600428   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:35.600435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:35.600519   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:35.634255   68713 cri.go:89] found id: ""
	I0815 18:38:35.634278   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.634287   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:35.634293   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:35.634339   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:35.670236   68713 cri.go:89] found id: ""
	I0815 18:38:35.670260   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.670268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:35.670273   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:35.670354   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:35.707691   68713 cri.go:89] found id: ""
	I0815 18:38:35.707714   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.707722   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:35.707727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:35.707782   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:35.745791   68713 cri.go:89] found id: ""
	I0815 18:38:35.745820   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.745832   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:35.745844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:35.745916   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:35.784167   68713 cri.go:89] found id: ""
	I0815 18:38:35.784195   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.784205   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:35.784217   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:35.784234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:35.864681   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:35.864711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:35.906831   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:35.906858   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:35.960328   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:35.960366   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:35.974401   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:35.974428   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:36.044789   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
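	Each retry fails the same way: the connection to localhost:8443 is refused because, as the crictl listings above show, no kube-apiserver container is running at all. This can be confirmed directly on the node; the sketch below is illustrative and not part of the test run (port 8443 is taken from the error text):

	    # Illustrative sketch; run inside the node, e.g. via 'minikube ssh'.
	    sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	    curl -k https://localhost:8443/healthz || true    # expect 'connection refused' while the apiserver is down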
	I0815 18:38:35.849744   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.348058   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.654916   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.155585   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.658767   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.159236   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.545951   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:38.561473   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:38.561540   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:38.597621   68713 cri.go:89] found id: ""
	I0815 18:38:38.597658   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.597668   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:38.597679   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:38.597756   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:38.632081   68713 cri.go:89] found id: ""
	I0815 18:38:38.632115   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.632127   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:38.632135   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:38.632203   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:38.669917   68713 cri.go:89] found id: ""
	I0815 18:38:38.669944   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.669952   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:38.669958   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:38.670015   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:38.707552   68713 cri.go:89] found id: ""
	I0815 18:38:38.707574   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.707582   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:38.707588   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:38.707642   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:38.746054   68713 cri.go:89] found id: ""
	I0815 18:38:38.746082   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.746093   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:38.746101   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:38.746166   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:38.783901   68713 cri.go:89] found id: ""
	I0815 18:38:38.783933   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.783945   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:38.783952   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:38.784018   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:38.825411   68713 cri.go:89] found id: ""
	I0815 18:38:38.825441   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.825452   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:38.825459   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:38.825520   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:38.863174   68713 cri.go:89] found id: ""
	I0815 18:38:38.863219   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.863231   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:38.863241   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:38.863254   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:38.914016   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:38.914045   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:38.927634   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:38.927659   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:38.993380   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:38.993407   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:38.993422   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:39.077075   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:39.077116   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:41.620219   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:41.633572   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:41.633628   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:41.670330   68713 cri.go:89] found id: ""
	I0815 18:38:41.670351   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.670358   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:41.670364   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:41.670418   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:41.706467   68713 cri.go:89] found id: ""
	I0815 18:38:41.706494   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.706502   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:41.706508   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:41.706564   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:41.742915   68713 cri.go:89] found id: ""
	I0815 18:38:41.742958   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.742970   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:41.742978   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:41.743044   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:41.778650   68713 cri.go:89] found id: ""
	I0815 18:38:41.778679   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.778687   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:41.778692   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:41.778739   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:41.813329   68713 cri.go:89] found id: ""
	I0815 18:38:41.813358   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.813369   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:41.813375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:41.813427   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:41.851351   68713 cri.go:89] found id: ""
	I0815 18:38:41.851383   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.851391   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:41.851398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:41.851460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:41.895097   68713 cri.go:89] found id: ""
	I0815 18:38:41.895130   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.895142   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:41.895150   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:41.895209   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:41.931306   68713 cri.go:89] found id: ""
	I0815 18:38:41.931336   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.931353   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:41.931365   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:41.931381   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:41.944796   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:41.944828   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:42.018868   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:42.018891   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:42.018903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:42.104304   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:42.104340   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:42.143625   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:42.143655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:40.349197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:42.850034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.655478   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.155025   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.159976   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:43.658013   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:45.658358   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.698568   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:44.712171   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:44.712247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.747043   68713 cri.go:89] found id: ""
	I0815 18:38:44.747071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.747079   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:44.747085   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:44.747143   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:44.782660   68713 cri.go:89] found id: ""
	I0815 18:38:44.782691   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.782703   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:44.782711   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:44.782765   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:44.821111   68713 cri.go:89] found id: ""
	I0815 18:38:44.821138   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.821146   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:44.821152   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:44.821222   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:44.859602   68713 cri.go:89] found id: ""
	I0815 18:38:44.859635   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.859647   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:44.859655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:44.859717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:44.895037   68713 cri.go:89] found id: ""
	I0815 18:38:44.895071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.895083   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:44.895090   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:44.895175   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:44.928729   68713 cri.go:89] found id: ""
	I0815 18:38:44.928759   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.928771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:44.928781   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:44.928844   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:44.963945   68713 cri.go:89] found id: ""
	I0815 18:38:44.963977   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.963987   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:44.963996   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:44.964060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:45.001166   68713 cri.go:89] found id: ""
	I0815 18:38:45.001195   68713 logs.go:276] 0 containers: []
	W0815 18:38:45.001206   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:45.001218   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:45.001234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:45.015181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:45.015209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:45.084297   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:45.084322   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:45.084334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:45.173833   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:45.173866   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:45.211863   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:45.211899   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:47.771009   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:47.784865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:47.784926   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.850332   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.347985   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:46.654797   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:48.654936   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.658823   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:50.178115   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.818497   68713 cri.go:89] found id: ""
	I0815 18:38:47.818526   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.818538   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:47.818545   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:47.818608   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:47.857900   68713 cri.go:89] found id: ""
	I0815 18:38:47.857927   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.857935   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:47.857941   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:47.857997   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:47.895778   68713 cri.go:89] found id: ""
	I0815 18:38:47.895809   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.895822   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:47.895829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:47.895887   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:47.937410   68713 cri.go:89] found id: ""
	I0815 18:38:47.937434   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.937442   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:47.937448   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:47.937505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:47.976414   68713 cri.go:89] found id: ""
	I0815 18:38:47.976442   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.976450   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:47.976455   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:47.976525   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:48.014863   68713 cri.go:89] found id: ""
	I0815 18:38:48.014891   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.014899   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:48.014906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:48.014969   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:48.053508   68713 cri.go:89] found id: ""
	I0815 18:38:48.053536   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.053546   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:48.053554   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:48.053624   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:48.088900   68713 cri.go:89] found id: ""
	I0815 18:38:48.088931   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.088943   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:48.088954   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:48.088969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:48.140415   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:48.140447   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:48.155958   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:48.155985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:48.229338   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:48.229368   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:48.229383   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:48.317776   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:48.317814   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:50.860592   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:50.877070   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:50.877154   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:50.937590   68713 cri.go:89] found id: ""
	I0815 18:38:50.937615   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.937622   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:50.937628   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:50.937687   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:50.972573   68713 cri.go:89] found id: ""
	I0815 18:38:50.972603   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.972614   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:50.972622   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:50.972683   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:51.008786   68713 cri.go:89] found id: ""
	I0815 18:38:51.008811   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.008820   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:51.008826   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:51.008875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:51.043076   68713 cri.go:89] found id: ""
	I0815 18:38:51.043105   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.043116   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:51.043123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:51.043186   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:51.078344   68713 cri.go:89] found id: ""
	I0815 18:38:51.078379   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.078391   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:51.078398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:51.078453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:51.114494   68713 cri.go:89] found id: ""
	I0815 18:38:51.114521   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.114532   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:51.114540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:51.114600   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:51.153871   68713 cri.go:89] found id: ""
	I0815 18:38:51.153898   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.153909   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:51.153917   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:51.153984   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:51.187908   68713 cri.go:89] found id: ""
	I0815 18:38:51.187937   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.187948   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:51.187959   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:51.187974   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:51.264172   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:51.264198   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:51.264214   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:51.345238   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:51.345285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:51.385720   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:51.385745   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:51.443313   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:51.443353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:49.849156   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.348628   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:51.154188   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.155256   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.658773   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:54.659127   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.959176   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:53.972031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:53.972101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:54.010673   68713 cri.go:89] found id: ""
	I0815 18:38:54.010699   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.010707   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:54.010714   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:54.010775   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:54.045632   68713 cri.go:89] found id: ""
	I0815 18:38:54.045662   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.045672   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:54.045678   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:54.045727   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:54.082111   68713 cri.go:89] found id: ""
	I0815 18:38:54.082134   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.082142   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:54.082148   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:54.082206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:54.118210   68713 cri.go:89] found id: ""
	I0815 18:38:54.118232   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.118239   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:54.118246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:54.118305   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:54.155474   68713 cri.go:89] found id: ""
	I0815 18:38:54.155499   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.155508   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:54.155515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:54.155572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:54.193263   68713 cri.go:89] found id: ""
	I0815 18:38:54.193298   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.193305   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:54.193311   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:54.193365   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:54.233389   68713 cri.go:89] found id: ""
	I0815 18:38:54.233416   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.233428   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:54.233435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:54.233502   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:54.266127   68713 cri.go:89] found id: ""
	I0815 18:38:54.266155   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.266164   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:54.266176   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:54.266199   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:54.318724   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:54.318762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:54.332993   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:54.333022   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:54.405895   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:54.405915   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:54.405926   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.485819   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:54.485875   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.024956   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:57.038182   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:57.038246   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:57.078020   68713 cri.go:89] found id: ""
	I0815 18:38:57.078044   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.078055   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:57.078063   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:57.078127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:57.115077   68713 cri.go:89] found id: ""
	I0815 18:38:57.115101   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.115110   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:57.115118   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:57.115179   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:57.152711   68713 cri.go:89] found id: ""
	I0815 18:38:57.152737   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.152747   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:57.152755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:57.152819   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:57.190000   68713 cri.go:89] found id: ""
	I0815 18:38:57.190034   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.190042   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:57.190048   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:57.190096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:57.224947   68713 cri.go:89] found id: ""
	I0815 18:38:57.224978   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.224990   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:57.224998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:57.225060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:57.262329   68713 cri.go:89] found id: ""
	I0815 18:38:57.262365   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.262375   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:57.262383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:57.262458   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:57.299471   68713 cri.go:89] found id: ""
	I0815 18:38:57.299498   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.299507   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:57.299513   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:57.299572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:57.357163   68713 cri.go:89] found id: ""
	I0815 18:38:57.357202   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.357211   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:57.357220   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:57.357236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.405154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:57.405184   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:57.459245   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:57.459277   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:57.473663   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:57.473699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:57.546670   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:57.546699   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:57.546715   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.348864   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.848276   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:55.655015   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.158306   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.662722   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:59.159559   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.124455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:00.137985   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:00.138053   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:00.175201   68713 cri.go:89] found id: ""
	I0815 18:39:00.175231   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.175242   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:00.175250   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:00.175328   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:00.209376   68713 cri.go:89] found id: ""
	I0815 18:39:00.209406   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.209418   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:00.209426   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:00.209484   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:00.246860   68713 cri.go:89] found id: ""
	I0815 18:39:00.246889   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.246899   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:00.246906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:00.246965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:00.282787   68713 cri.go:89] found id: ""
	I0815 18:39:00.282814   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.282823   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:00.282829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:00.282875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:00.330227   68713 cri.go:89] found id: ""
	I0815 18:39:00.330256   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.330268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:00.330275   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:00.330338   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:00.363028   68713 cri.go:89] found id: ""
	I0815 18:39:00.363061   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.363072   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:00.363079   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:00.363169   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:00.400484   68713 cri.go:89] found id: ""
	I0815 18:39:00.400522   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.400533   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:00.400540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:00.400597   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:00.436187   68713 cri.go:89] found id: ""
	I0815 18:39:00.436225   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.436238   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:00.436252   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:00.436267   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:00.481960   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:00.481985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:00.535103   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:00.535138   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:00.548541   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:00.548568   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:00.619476   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:00.619505   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:00.619525   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:01.347916   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.349448   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.654384   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.155048   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:01.658374   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.658824   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.206473   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:03.222893   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:03.222967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:03.272249   68713 cri.go:89] found id: ""
	I0815 18:39:03.272275   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.272283   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:03.272291   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:03.272355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:03.336786   68713 cri.go:89] found id: ""
	I0815 18:39:03.336811   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.336819   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:03.336825   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:03.336884   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:03.375977   68713 cri.go:89] found id: ""
	I0815 18:39:03.376002   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.376011   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:03.376016   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:03.376063   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:03.410304   68713 cri.go:89] found id: ""
	I0815 18:39:03.410326   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.410335   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:03.410340   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:03.410403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:03.446147   68713 cri.go:89] found id: ""
	I0815 18:39:03.446176   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.446188   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:03.446195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:03.446256   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:03.482555   68713 cri.go:89] found id: ""
	I0815 18:39:03.482582   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.482591   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:03.482597   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:03.482654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:03.519477   68713 cri.go:89] found id: ""
	I0815 18:39:03.519503   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.519511   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:03.519517   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:03.519574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:03.556539   68713 cri.go:89] found id: ""
	I0815 18:39:03.556566   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.556577   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:03.556587   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:03.556602   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:03.610553   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:03.610593   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:03.625799   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:03.625827   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:03.697106   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:03.697132   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:03.697149   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:03.779089   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:03.779120   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:06.319280   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:06.333284   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:06.333355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:06.369696   68713 cri.go:89] found id: ""
	I0815 18:39:06.369719   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.369727   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:06.369732   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:06.369780   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:06.405023   68713 cri.go:89] found id: ""
	I0815 18:39:06.405046   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.405053   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:06.405059   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:06.405113   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:06.439948   68713 cri.go:89] found id: ""
	I0815 18:39:06.439974   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.439983   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:06.439989   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:06.440048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:06.475613   68713 cri.go:89] found id: ""
	I0815 18:39:06.475642   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.475654   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:06.475664   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:06.475723   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:06.510698   68713 cri.go:89] found id: ""
	I0815 18:39:06.510721   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.510729   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:06.510735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:06.510783   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:06.545831   68713 cri.go:89] found id: ""
	I0815 18:39:06.545861   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.545873   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:06.545880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:06.545940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:06.579027   68713 cri.go:89] found id: ""
	I0815 18:39:06.579053   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.579064   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:06.579072   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:06.579132   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:06.615308   68713 cri.go:89] found id: ""
	I0815 18:39:06.615339   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.615352   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:06.615371   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:06.615396   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:06.671523   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:06.671555   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:06.685556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:06.685580   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:06.765036   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:06.765059   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:06.765071   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:06.843412   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:06.843457   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:05.849018   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.849342   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:05.654854   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.654910   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.655240   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:06.158409   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:08.657902   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:10.658258   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.390799   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:09.404099   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:09.404160   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:09.439534   68713 cri.go:89] found id: ""
	I0815 18:39:09.439563   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.439582   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:09.439591   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:09.439654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:09.478933   68713 cri.go:89] found id: ""
	I0815 18:39:09.478963   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.478974   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:09.478982   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:09.479042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:09.514396   68713 cri.go:89] found id: ""
	I0815 18:39:09.514425   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.514436   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:09.514444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:09.514510   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:09.547749   68713 cri.go:89] found id: ""
	I0815 18:39:09.547775   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.547785   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:09.547793   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:09.547856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:09.583583   68713 cri.go:89] found id: ""
	I0815 18:39:09.583611   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.583623   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:09.583631   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:09.583695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:09.616530   68713 cri.go:89] found id: ""
	I0815 18:39:09.616560   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.616570   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:09.616576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:09.616641   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:09.655167   68713 cri.go:89] found id: ""
	I0815 18:39:09.655189   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.655199   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:09.655207   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:09.655263   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:09.691368   68713 cri.go:89] found id: ""
	I0815 18:39:09.691391   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.691398   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:09.691411   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:09.691426   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:09.740739   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:09.740770   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:09.755049   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:09.755074   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:09.825053   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:09.825080   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:09.825095   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:09.903036   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:09.903076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:12.441898   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:12.454637   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:12.454712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:12.494604   68713 cri.go:89] found id: ""
	I0815 18:39:12.494632   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.494640   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:12.494646   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:12.494699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:12.531587   68713 cri.go:89] found id: ""
	I0815 18:39:12.531631   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.531642   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:12.531649   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:12.531710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:12.564991   68713 cri.go:89] found id: ""
	I0815 18:39:12.565014   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.565021   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:12.565027   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:12.565096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:12.600667   68713 cri.go:89] found id: ""
	I0815 18:39:12.600698   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.600709   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:12.600715   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:12.600777   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:12.633658   68713 cri.go:89] found id: ""
	I0815 18:39:12.633681   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.633691   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:12.633698   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:12.633759   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:12.673709   68713 cri.go:89] found id: ""
	I0815 18:39:12.673730   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.673737   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:12.673743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:12.673790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:12.707353   68713 cri.go:89] found id: ""
	I0815 18:39:12.707378   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.707385   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:12.707390   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:12.707437   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:12.746926   68713 cri.go:89] found id: ""
	I0815 18:39:12.746949   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.746957   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:12.746965   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:12.746977   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:09.853116   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.348297   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:11.655347   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:14.154929   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:13.158257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:15.158457   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.792154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:12.792180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:12.843933   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:12.843968   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:12.859583   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:12.859609   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:12.940856   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:12.940880   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:12.940895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:15.520265   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:15.533677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:15.533754   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:15.572109   68713 cri.go:89] found id: ""
	I0815 18:39:15.572135   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.572145   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:15.572153   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:15.572221   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:15.607442   68713 cri.go:89] found id: ""
	I0815 18:39:15.607472   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.607484   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:15.607492   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:15.607551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:15.642206   68713 cri.go:89] found id: ""
	I0815 18:39:15.642230   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.642238   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:15.642246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:15.642308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:15.677914   68713 cri.go:89] found id: ""
	I0815 18:39:15.677945   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.677956   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:15.677963   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:15.678049   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:15.714466   68713 cri.go:89] found id: ""
	I0815 18:39:15.714496   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.714504   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:15.714510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:15.714563   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:15.750961   68713 cri.go:89] found id: ""
	I0815 18:39:15.750987   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.750995   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:15.751002   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:15.751050   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:15.785399   68713 cri.go:89] found id: ""
	I0815 18:39:15.785434   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.785444   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:15.785450   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:15.785498   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:15.821547   68713 cri.go:89] found id: ""
	I0815 18:39:15.821571   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.821578   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:15.821586   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:15.821597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:15.875299   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:15.875329   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:15.890376   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:15.890408   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:15.957317   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:15.957337   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:15.957352   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:16.033952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:16.033997   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:14.349171   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.849292   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.850822   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.654572   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.656041   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:17.657984   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:19.658366   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.571953   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:18.584652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:18.584721   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:18.617043   68713 cri.go:89] found id: ""
	I0815 18:39:18.617066   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.617073   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:18.617079   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:18.617127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:18.651080   68713 cri.go:89] found id: ""
	I0815 18:39:18.651112   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.651123   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:18.651130   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:18.651187   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:18.686857   68713 cri.go:89] found id: ""
	I0815 18:39:18.686890   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.686901   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:18.686909   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:18.686975   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:18.719397   68713 cri.go:89] found id: ""
	I0815 18:39:18.719434   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.719444   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:18.719452   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:18.719521   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:18.758316   68713 cri.go:89] found id: ""
	I0815 18:39:18.758349   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.758360   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:18.758366   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:18.758435   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:18.791586   68713 cri.go:89] found id: ""
	I0815 18:39:18.791609   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.791617   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:18.791623   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:18.791690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:18.827905   68713 cri.go:89] found id: ""
	I0815 18:39:18.827929   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.827937   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:18.827945   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:18.828004   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:18.869371   68713 cri.go:89] found id: ""
	I0815 18:39:18.869404   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.869412   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:18.869420   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:18.869432   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:18.920124   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:18.920158   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:18.936137   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:18.936168   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:19.006877   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:19.006902   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:19.006913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:19.088909   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:19.088953   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.632734   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:21.647246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:21.647322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:21.685574   68713 cri.go:89] found id: ""
	I0815 18:39:21.685606   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.685614   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:21.685620   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:21.685676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:21.717073   68713 cri.go:89] found id: ""
	I0815 18:39:21.717112   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.717124   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:21.717133   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:21.717205   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:21.752072   68713 cri.go:89] found id: ""
	I0815 18:39:21.752101   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.752112   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:21.752120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:21.752182   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:21.786811   68713 cri.go:89] found id: ""
	I0815 18:39:21.786834   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.786842   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:21.786848   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:21.786893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:21.823694   68713 cri.go:89] found id: ""
	I0815 18:39:21.823719   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.823728   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:21.823733   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:21.823790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:21.859358   68713 cri.go:89] found id: ""
	I0815 18:39:21.859387   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.859398   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:21.859405   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:21.859469   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:21.893723   68713 cri.go:89] found id: ""
	I0815 18:39:21.893751   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.893761   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:21.893769   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:21.893826   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:21.929338   68713 cri.go:89] found id: ""
	I0815 18:39:21.929368   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.929379   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:21.929388   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:21.929414   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:21.979107   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:21.979141   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:21.993968   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:21.994005   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:22.063359   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:22.063384   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:22.063398   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:22.144303   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:22.144337   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.348202   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.349199   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.154244   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.155954   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.658572   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.658782   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.658946   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:24.688104   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:24.701230   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:24.701298   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:24.735056   68713 cri.go:89] found id: ""
	I0815 18:39:24.735086   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.735097   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:24.735104   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:24.735172   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:24.769704   68713 cri.go:89] found id: ""
	I0815 18:39:24.769732   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.769743   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:24.769751   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:24.769812   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:24.808763   68713 cri.go:89] found id: ""
	I0815 18:39:24.808788   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.808796   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:24.808807   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:24.808856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:24.846997   68713 cri.go:89] found id: ""
	I0815 18:39:24.847028   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.847038   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:24.847045   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:24.847106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:24.881681   68713 cri.go:89] found id: ""
	I0815 18:39:24.881705   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.881713   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:24.881719   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:24.881772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:24.917000   68713 cri.go:89] found id: ""
	I0815 18:39:24.917024   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.917032   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:24.917040   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:24.917088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:24.951133   68713 cri.go:89] found id: ""
	I0815 18:39:24.951156   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.951164   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:24.951170   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:24.951218   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:24.987306   68713 cri.go:89] found id: ""
	I0815 18:39:24.987331   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.987339   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:24.987347   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:24.987360   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:25.039533   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:25.039566   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:25.053011   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:25.053036   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:25.125864   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:25.125884   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:25.125895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:25.209885   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:25.209916   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:27.751681   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:27.765316   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:27.765390   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:25.848840   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.849344   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.156068   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.654722   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:28.158317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.158632   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.805820   68713 cri.go:89] found id: ""
	I0815 18:39:27.805858   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.805870   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:27.805878   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:27.805940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:27.846684   68713 cri.go:89] found id: ""
	I0815 18:39:27.846717   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.846727   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:27.846737   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:27.846801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:27.882326   68713 cri.go:89] found id: ""
	I0815 18:39:27.882358   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.882370   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:27.882378   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:27.882448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:27.917340   68713 cri.go:89] found id: ""
	I0815 18:39:27.917416   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.917431   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:27.917442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:27.917505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:27.952674   68713 cri.go:89] found id: ""
	I0815 18:39:27.952700   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.952708   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:27.952714   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:27.952763   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:27.986103   68713 cri.go:89] found id: ""
	I0815 18:39:27.986132   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.986143   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:27.986151   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:27.986212   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:28.023674   68713 cri.go:89] found id: ""
	I0815 18:39:28.023716   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.023735   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:28.023742   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:28.023807   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:28.064902   68713 cri.go:89] found id: ""
	I0815 18:39:28.064929   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.064937   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:28.064945   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:28.064957   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:28.116145   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:28.116180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:28.130435   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:28.130462   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:28.204899   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:28.204920   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:28.204931   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:28.284165   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:28.284202   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:30.824135   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:30.837515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:30.837583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:30.874671   68713 cri.go:89] found id: ""
	I0815 18:39:30.874695   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.874705   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:30.874712   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:30.874776   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:30.909990   68713 cri.go:89] found id: ""
	I0815 18:39:30.910027   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.910038   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:30.910045   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:30.910106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:30.946824   68713 cri.go:89] found id: ""
	I0815 18:39:30.946851   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.946859   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:30.946865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:30.946912   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:30.983392   68713 cri.go:89] found id: ""
	I0815 18:39:30.983419   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.983429   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:30.983437   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:30.983505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:31.023471   68713 cri.go:89] found id: ""
	I0815 18:39:31.023500   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.023510   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:31.023518   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:31.023583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:31.063586   68713 cri.go:89] found id: ""
	I0815 18:39:31.063616   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.063627   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:31.063636   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:31.063696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:31.103147   68713 cri.go:89] found id: ""
	I0815 18:39:31.103173   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.103180   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:31.103186   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:31.103237   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:31.144082   68713 cri.go:89] found id: ""
	I0815 18:39:31.144113   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.144124   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:31.144136   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:31.144150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:31.212535   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:31.212563   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:31.212586   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:31.292039   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:31.292076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:31.335023   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:31.335050   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:31.388817   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:31.388853   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:30.349110   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.349209   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.154683   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.653806   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.654716   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.658245   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.659119   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:33.925861   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:33.939604   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:33.939668   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:33.974538   68713 cri.go:89] found id: ""
	I0815 18:39:33.974563   68713 logs.go:276] 0 containers: []
	W0815 18:39:33.974571   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:33.974577   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:33.974634   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:34.009017   68713 cri.go:89] found id: ""
	I0815 18:39:34.009048   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.009058   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:34.009064   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:34.009120   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:34.049478   68713 cri.go:89] found id: ""
	I0815 18:39:34.049501   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.049517   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:34.049523   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:34.049576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:34.091011   68713 cri.go:89] found id: ""
	I0815 18:39:34.091040   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.091050   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:34.091056   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:34.091114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:34.126617   68713 cri.go:89] found id: ""
	I0815 18:39:34.126640   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.126650   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:34.126657   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:34.126706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:34.168140   68713 cri.go:89] found id: ""
	I0815 18:39:34.168169   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.168179   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:34.168187   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:34.168279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:34.205052   68713 cri.go:89] found id: ""
	I0815 18:39:34.205081   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.205091   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:34.205098   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:34.205173   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:34.238474   68713 cri.go:89] found id: ""
	I0815 18:39:34.238499   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.238506   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:34.238521   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:34.238540   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.280574   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:34.280601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:34.332662   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:34.332704   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:34.348556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:34.348591   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:34.421428   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:34.421450   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:34.421464   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.004855   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:37.019306   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:37.019378   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:37.057588   68713 cri.go:89] found id: ""
	I0815 18:39:37.057618   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.057626   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:37.057641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:37.057706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:37.095645   68713 cri.go:89] found id: ""
	I0815 18:39:37.095678   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.095687   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:37.095693   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:37.095750   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:37.131669   68713 cri.go:89] found id: ""
	I0815 18:39:37.131696   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.131711   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:37.131717   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:37.131772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:37.185065   68713 cri.go:89] found id: ""
	I0815 18:39:37.185097   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.185108   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:37.185115   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:37.185180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:37.220220   68713 cri.go:89] found id: ""
	I0815 18:39:37.220251   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.220262   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:37.220269   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:37.220322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:37.259816   68713 cri.go:89] found id: ""
	I0815 18:39:37.259849   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.259859   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:37.259868   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:37.259920   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:37.292777   68713 cri.go:89] found id: ""
	I0815 18:39:37.292807   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.292818   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:37.292825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:37.292888   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:37.328673   68713 cri.go:89] found id: ""
	I0815 18:39:37.328703   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.328714   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:37.328725   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:37.328740   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:37.379131   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:37.379172   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:37.392982   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:37.393017   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:37.470727   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:37.470750   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:37.470766   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.552353   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:37.552386   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.349765   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:36.655101   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.154433   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.158746   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.658907   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:40.094008   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:40.107681   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:40.107753   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:40.142229   68713 cri.go:89] found id: ""
	I0815 18:39:40.142254   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.142264   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:40.142271   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:40.142333   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:40.180622   68713 cri.go:89] found id: ""
	I0815 18:39:40.180650   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.180665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:40.180672   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:40.180733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:40.219085   68713 cri.go:89] found id: ""
	I0815 18:39:40.219113   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.219120   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:40.219126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:40.219174   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:40.254807   68713 cri.go:89] found id: ""
	I0815 18:39:40.254838   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.254850   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:40.254858   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:40.254940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:40.290438   68713 cri.go:89] found id: ""
	I0815 18:39:40.290466   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.290478   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:40.290484   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:40.290547   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:40.326320   68713 cri.go:89] found id: ""
	I0815 18:39:40.326356   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.326364   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:40.326370   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:40.326429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:40.361538   68713 cri.go:89] found id: ""
	I0815 18:39:40.361563   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.361570   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:40.361576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:40.361629   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:40.397275   68713 cri.go:89] found id: ""
	I0815 18:39:40.397304   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.397316   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:40.397326   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:40.397342   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:40.466042   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:40.466064   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:40.466078   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:40.544915   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:40.544951   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:40.584992   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:40.585019   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:40.634792   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:40.634837   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:39.848609   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.849831   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.655153   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.655372   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:42.159650   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:44.658547   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.149819   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:43.164578   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:43.164649   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:43.199576   68713 cri.go:89] found id: ""
	I0815 18:39:43.199621   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.199633   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:43.199641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:43.199702   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:43.233783   68713 cri.go:89] found id: ""
	I0815 18:39:43.233820   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.233833   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:43.233840   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:43.233911   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:43.269406   68713 cri.go:89] found id: ""
	I0815 18:39:43.269437   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.269449   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:43.269457   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:43.269570   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:43.310686   68713 cri.go:89] found id: ""
	I0815 18:39:43.310715   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.310726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:43.310734   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:43.310795   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:43.348662   68713 cri.go:89] found id: ""
	I0815 18:39:43.348689   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.348699   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:43.348706   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:43.348767   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:43.385676   68713 cri.go:89] found id: ""
	I0815 18:39:43.385714   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.385726   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:43.385737   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:43.385802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:43.422605   68713 cri.go:89] found id: ""
	I0815 18:39:43.422634   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.422645   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:43.422653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:43.422712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:43.463208   68713 cri.go:89] found id: ""
	I0815 18:39:43.463238   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.463249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:43.463260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:43.463278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:43.476637   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:43.476664   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:43.552239   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:43.552263   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:43.552278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:43.653055   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:43.653108   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:43.699166   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:43.699192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.251725   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:46.265164   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:46.265240   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:46.305095   68713 cri.go:89] found id: ""
	I0815 18:39:46.305123   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.305133   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:46.305140   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:46.305196   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:46.349744   68713 cri.go:89] found id: ""
	I0815 18:39:46.349773   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.349783   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:46.349790   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:46.349858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:46.385807   68713 cri.go:89] found id: ""
	I0815 18:39:46.385839   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.385847   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:46.385853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:46.385908   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:46.419977   68713 cri.go:89] found id: ""
	I0815 18:39:46.420011   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.420024   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:46.420031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:46.420093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:46.454852   68713 cri.go:89] found id: ""
	I0815 18:39:46.454883   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.454894   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:46.454901   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:46.454962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:46.497157   68713 cri.go:89] found id: ""
	I0815 18:39:46.497192   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.497202   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:46.497210   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:46.497271   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:46.530191   68713 cri.go:89] found id: ""
	I0815 18:39:46.530218   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.530226   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:46.530232   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:46.530282   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:46.566024   68713 cri.go:89] found id: ""
	I0815 18:39:46.566050   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.566063   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:46.566074   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:46.566103   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.621969   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:46.622004   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:46.636576   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:46.636603   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:46.706819   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:46.706842   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:46.706857   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:46.786589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:46.786634   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:44.352685   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.849090   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.849424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:45.655900   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.154862   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.658710   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.157317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.324853   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:49.343543   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:49.343618   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:49.396260   68713 cri.go:89] found id: ""
	I0815 18:39:49.396292   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.396303   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:49.396311   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:49.396380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:49.437579   68713 cri.go:89] found id: ""
	I0815 18:39:49.437604   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.437612   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:49.437617   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:49.437663   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:49.476206   68713 cri.go:89] found id: ""
	I0815 18:39:49.476232   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.476239   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:49.476245   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:49.476296   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:49.511324   68713 cri.go:89] found id: ""
	I0815 18:39:49.511349   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.511357   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:49.511363   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:49.511428   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:49.545875   68713 cri.go:89] found id: ""
	I0815 18:39:49.545907   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.545916   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:49.545922   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:49.545981   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:49.582176   68713 cri.go:89] found id: ""
	I0815 18:39:49.582204   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.582228   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:49.582246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:49.582309   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:49.623288   68713 cri.go:89] found id: ""
	I0815 18:39:49.623318   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.623327   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:49.623333   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:49.623394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:49.662352   68713 cri.go:89] found id: ""
	I0815 18:39:49.662377   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.662389   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:49.662399   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:49.662424   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:49.745582   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:49.745617   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:49.785256   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:49.785295   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:49.835944   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:49.835979   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:49.852859   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:49.852886   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:49.928427   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
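The cycle above is minikube's control-plane probe repeating: a pgrep for a kube-apiserver process, a crictl listing for each expected container, then a fallback to log collection when nothing is found. A minimal shell sketch of that probe, reassembled from the exact commands shown in the log (the kubectl path, the v1.20.0 binary, and the kubeconfig path are copied from the log lines; the loop itself is an illustrative reconstruction, not code from the report):

    # Probe each expected control-plane container the way the log does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # An empty result corresponds to the 'No container was found matching ...' warnings above.
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done

    # The 'describe nodes' step fails the same way while the API server on localhost:8443 is down.
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      || echo "describe nodes failed: API server unreachable"

While the API server stays down, every iteration ends identically, which is why the same "connection to the server localhost:8443 was refused" block recurs throughout this log.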
	I0815 18:39:52.429223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:52.442384   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:52.442460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:52.480515   68713 cri.go:89] found id: ""
	I0815 18:39:52.480543   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.480553   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:52.480558   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:52.480605   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:52.518346   68713 cri.go:89] found id: ""
	I0815 18:39:52.518382   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.518393   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:52.518401   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:52.518460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:52.557696   68713 cri.go:89] found id: ""
	I0815 18:39:52.557722   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.557731   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:52.557736   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:52.557786   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:52.590849   68713 cri.go:89] found id: ""
	I0815 18:39:52.590879   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.590890   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:52.590898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:52.590961   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:52.629950   68713 cri.go:89] found id: ""
	I0815 18:39:52.629980   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.629992   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:52.629999   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:52.630047   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:52.666039   68713 cri.go:89] found id: ""
	I0815 18:39:52.666070   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.666081   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:52.666089   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:52.666146   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:52.699917   68713 cri.go:89] found id: ""
	I0815 18:39:52.699941   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.699949   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:52.699955   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:52.700001   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:52.735944   68713 cri.go:89] found id: ""
	I0815 18:39:52.735973   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.735981   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:52.735989   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:52.736001   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:39:50.849633   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.850298   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:50.155118   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.155166   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:54.653844   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:51.159401   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:53.658513   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:39:52.805519   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.805537   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:52.805559   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:52.894175   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:52.894213   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:52.932974   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:52.933006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:52.984206   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:52.984244   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.498477   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:55.511319   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:55.511380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:55.544899   68713 cri.go:89] found id: ""
	I0815 18:39:55.544928   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.544936   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:55.544943   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:55.545003   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:55.578821   68713 cri.go:89] found id: ""
	I0815 18:39:55.578855   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.578864   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:55.578869   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:55.578922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:55.615392   68713 cri.go:89] found id: ""
	I0815 18:39:55.615422   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.615434   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:55.615441   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:55.615501   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:55.653456   68713 cri.go:89] found id: ""
	I0815 18:39:55.653482   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.653493   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:55.653500   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:55.653558   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:55.687716   68713 cri.go:89] found id: ""
	I0815 18:39:55.687741   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.687749   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:55.687755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:55.687802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:55.725518   68713 cri.go:89] found id: ""
	I0815 18:39:55.725543   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.725553   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:55.725561   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:55.725631   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:55.758451   68713 cri.go:89] found id: ""
	I0815 18:39:55.758479   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.758490   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:55.758498   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:55.758560   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:55.792653   68713 cri.go:89] found id: ""
	I0815 18:39:55.792680   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.792687   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:55.792699   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:55.792710   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:55.832127   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:55.832156   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:55.885255   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:55.885289   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.898980   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:55.899009   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:55.967579   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:55.967609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:55.967624   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:55.348998   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:57.349656   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.654840   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.655471   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.158348   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.658194   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.658852   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
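The interleaved pod_ready lines come from three separate test processes (PIDs 67936, 68248 and 68429), each waiting for its metrics-server pod to report Ready, which never happens in this run. A readiness poll equivalent to what those helpers do, sketched with plain kubectl (the k8s-app=metrics-server label selector and the 2-second interval are assumptions for illustration, not taken from the report):

    # Poll until the metrics-server pod reports Ready=True.
    until kubectl -n kube-system get pods -l k8s-app=metrics-server \
          -o jsonpath='{.items[0].status.conditions[?(@.type=="Ready")].status}' \
          | grep -q True; do
      echo "metrics-server not Ready yet"
      sleep 2
    done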
	I0815 18:39:58.543524   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:58.556338   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:58.556412   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:58.593359   68713 cri.go:89] found id: ""
	I0815 18:39:58.593390   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.593401   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:58.593409   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:58.593472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:58.628446   68713 cri.go:89] found id: ""
	I0815 18:39:58.628471   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.628481   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:58.628504   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:58.628567   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:58.663930   68713 cri.go:89] found id: ""
	I0815 18:39:58.663954   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.663964   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:58.663971   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:58.664028   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:58.701070   68713 cri.go:89] found id: ""
	I0815 18:39:58.701095   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.701103   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:58.701108   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:58.701156   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:58.734427   68713 cri.go:89] found id: ""
	I0815 18:39:58.734457   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.734468   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:58.734476   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:58.734543   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:58.769121   68713 cri.go:89] found id: ""
	I0815 18:39:58.769144   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.769152   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:58.769162   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:58.769215   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:58.805771   68713 cri.go:89] found id: ""
	I0815 18:39:58.805796   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.805803   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:58.805808   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:58.805856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:58.840288   68713 cri.go:89] found id: ""
	I0815 18:39:58.840315   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.840325   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:58.840336   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:58.840351   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:58.895856   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:58.895893   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:58.909453   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:58.909478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:58.975939   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:58.975960   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:58.975971   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.055318   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:59.055353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.595588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:01.608625   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:01.608690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:01.646105   68713 cri.go:89] found id: ""
	I0815 18:40:01.646133   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.646144   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:01.646151   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:01.646214   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:01.685162   68713 cri.go:89] found id: ""
	I0815 18:40:01.685192   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.685202   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:01.685210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:01.685261   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:01.721452   68713 cri.go:89] found id: ""
	I0815 18:40:01.721479   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.721499   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:01.721507   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:01.721576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:01.762288   68713 cri.go:89] found id: ""
	I0815 18:40:01.762318   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.762331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:01.762339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:01.762429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:01.800547   68713 cri.go:89] found id: ""
	I0815 18:40:01.800579   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.800590   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:01.800598   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:01.800660   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:01.839182   68713 cri.go:89] found id: ""
	I0815 18:40:01.839214   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.839223   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:01.839229   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:01.839294   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:01.875364   68713 cri.go:89] found id: ""
	I0815 18:40:01.875390   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.875398   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:01.875404   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:01.875452   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:01.910485   68713 cri.go:89] found id: ""
	I0815 18:40:01.910512   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.910521   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:01.910535   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:01.910547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.951970   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:01.951998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:02.005720   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:02.005764   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:02.020941   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:02.020969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:02.101206   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:02.101224   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:02.101236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.850909   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:02.349180   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.659366   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.153614   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.158375   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.159868   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:04.687482   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:04.701501   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:04.701562   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.739613   68713 cri.go:89] found id: ""
	I0815 18:40:04.739636   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.739644   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:04.739650   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:04.739704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:04.774419   68713 cri.go:89] found id: ""
	I0815 18:40:04.774443   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.774453   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:04.774460   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:04.774522   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:04.809516   68713 cri.go:89] found id: ""
	I0815 18:40:04.809538   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.809547   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:04.809552   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:04.809612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:04.843822   68713 cri.go:89] found id: ""
	I0815 18:40:04.843850   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.843870   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:04.843878   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:04.843942   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:04.883853   68713 cri.go:89] found id: ""
	I0815 18:40:04.883881   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.883892   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:04.883900   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:04.883962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:04.918811   68713 cri.go:89] found id: ""
	I0815 18:40:04.918838   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.918846   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:04.918852   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:04.918903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:04.953076   68713 cri.go:89] found id: ""
	I0815 18:40:04.953101   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.953110   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:04.953116   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:04.953163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:04.988219   68713 cri.go:89] found id: ""
	I0815 18:40:04.988246   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.988255   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:04.988264   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:04.988275   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:05.060859   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:05.060896   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:05.060913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:05.146768   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:05.146817   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:05.187816   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:05.187845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:05.239027   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:05.239067   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:07.754503   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:07.769608   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:07.769695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:06.850409   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.155042   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.654547   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:09.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.658972   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:10.159255   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.804435   68713 cri.go:89] found id: ""
	I0815 18:40:07.804460   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.804468   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:07.804474   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:07.804551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:07.839760   68713 cri.go:89] found id: ""
	I0815 18:40:07.839787   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.839797   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:07.839804   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:07.839868   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:07.877984   68713 cri.go:89] found id: ""
	I0815 18:40:07.878009   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.878017   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:07.878022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:07.878070   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:07.914294   68713 cri.go:89] found id: ""
	I0815 18:40:07.914319   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.914328   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:07.914336   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:07.914395   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:07.948751   68713 cri.go:89] found id: ""
	I0815 18:40:07.948777   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.948787   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:07.948795   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:07.948861   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:07.982262   68713 cri.go:89] found id: ""
	I0815 18:40:07.982288   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.982296   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:07.982302   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:07.982358   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:08.015560   68713 cri.go:89] found id: ""
	I0815 18:40:08.015588   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.015596   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:08.015602   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:08.015662   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:08.049854   68713 cri.go:89] found id: ""
	I0815 18:40:08.049878   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.049885   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:08.049893   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:08.049905   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:08.102269   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:08.102303   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:08.117181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:08.117209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:08.188586   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:08.188609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:08.188623   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:08.272204   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:08.272239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:10.813223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:10.826181   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:10.826257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:10.863728   68713 cri.go:89] found id: ""
	I0815 18:40:10.863753   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.863761   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:10.863766   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:10.863813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:10.898074   68713 cri.go:89] found id: ""
	I0815 18:40:10.898102   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.898113   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:10.898121   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:10.898183   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:10.933948   68713 cri.go:89] found id: ""
	I0815 18:40:10.933980   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.933991   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:10.933998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:10.934059   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:10.972402   68713 cri.go:89] found id: ""
	I0815 18:40:10.972428   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.972436   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:10.972442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:10.972509   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:11.006814   68713 cri.go:89] found id: ""
	I0815 18:40:11.006843   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.006851   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:11.006857   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:11.006909   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:11.042739   68713 cri.go:89] found id: ""
	I0815 18:40:11.042763   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.042771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:11.042777   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:11.042835   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:11.079132   68713 cri.go:89] found id: ""
	I0815 18:40:11.079164   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.079173   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:11.079179   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:11.079228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:11.113271   68713 cri.go:89] found id: ""
	I0815 18:40:11.113298   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.113309   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:11.113317   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:11.113328   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:11.166669   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:11.166698   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:11.180789   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:11.180815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:11.247954   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:11.247985   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:11.247999   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:11.331952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:11.331995   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:09.349194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.349627   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.850439   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.655088   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.656674   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:12.658287   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:15.158361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.874466   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:13.888346   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:13.888416   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:13.922542   68713 cri.go:89] found id: ""
	I0815 18:40:13.922569   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.922579   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:13.922586   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:13.922654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:13.958039   68713 cri.go:89] found id: ""
	I0815 18:40:13.958066   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.958076   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:13.958082   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:13.958131   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:13.994095   68713 cri.go:89] found id: ""
	I0815 18:40:13.994125   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.994136   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:13.994144   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:13.994195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:14.027918   68713 cri.go:89] found id: ""
	I0815 18:40:14.027949   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.027960   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:14.027969   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:14.028027   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:14.063849   68713 cri.go:89] found id: ""
	I0815 18:40:14.063879   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.063889   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:14.063897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:14.063957   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:14.098444   68713 cri.go:89] found id: ""
	I0815 18:40:14.098473   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.098483   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:14.098490   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:14.098553   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:14.136834   68713 cri.go:89] found id: ""
	I0815 18:40:14.136861   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.136874   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:14.136880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:14.136925   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:14.172377   68713 cri.go:89] found id: ""
	I0815 18:40:14.172400   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.172408   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:14.172415   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:14.172430   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:14.212212   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:14.212242   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:14.268412   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:14.268450   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:14.282978   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:14.283006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:14.352777   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:14.352796   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:14.352822   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:16.939906   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:16.953118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:16.953178   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:16.991697   68713 cri.go:89] found id: ""
	I0815 18:40:16.991723   68713 logs.go:276] 0 containers: []
	W0815 18:40:16.991731   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:16.991736   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:16.991801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:17.027572   68713 cri.go:89] found id: ""
	I0815 18:40:17.027602   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.027613   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:17.027623   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:17.027682   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:17.060718   68713 cri.go:89] found id: ""
	I0815 18:40:17.060750   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.060763   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:17.060771   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:17.060829   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:17.096746   68713 cri.go:89] found id: ""
	I0815 18:40:17.096771   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.096780   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:17.096786   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:17.096846   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:17.130755   68713 cri.go:89] found id: ""
	I0815 18:40:17.130791   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.130802   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:17.130810   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:17.130872   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:17.167991   68713 cri.go:89] found id: ""
	I0815 18:40:17.168016   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.168026   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:17.168034   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:17.168093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:17.200695   68713 cri.go:89] found id: ""
	I0815 18:40:17.200722   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.200733   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:17.200741   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:17.200799   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:17.237788   68713 cri.go:89] found id: ""
	I0815 18:40:17.237816   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.237824   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:17.237833   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:17.237848   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:17.288888   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:17.288921   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:17.302862   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:17.302903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:17.370062   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:17.370085   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:17.370100   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:17.444742   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:17.444781   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:16.349749   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.849197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:16.155555   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.654875   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:17.160009   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.657774   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.984813   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:19.998010   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:19.998077   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:20.032880   68713 cri.go:89] found id: ""
	I0815 18:40:20.032903   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.032912   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:20.032918   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:20.032973   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:20.069191   68713 cri.go:89] found id: ""
	I0815 18:40:20.069224   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.069236   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:20.069243   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:20.069301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:20.101930   68713 cri.go:89] found id: ""
	I0815 18:40:20.101954   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.101962   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:20.101968   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:20.102016   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:20.136981   68713 cri.go:89] found id: ""
	I0815 18:40:20.137006   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.137014   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:20.137020   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:20.137066   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:20.174517   68713 cri.go:89] found id: ""
	I0815 18:40:20.174543   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.174550   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:20.174556   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:20.174611   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:20.208525   68713 cri.go:89] found id: ""
	I0815 18:40:20.208549   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.208559   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:20.208567   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:20.208626   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:20.240824   68713 cri.go:89] found id: ""
	I0815 18:40:20.240855   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.240867   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:20.240874   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:20.240946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:20.277683   68713 cri.go:89] found id: ""
	I0815 18:40:20.277710   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.277720   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:20.277728   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:20.277739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:20.324271   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:20.324304   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:20.376250   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:20.376285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:20.392777   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:20.392813   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:20.464122   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:20.464156   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:20.464180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:20.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:22.849591   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:20.654982   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.154537   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:21.658354   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.658505   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.041684   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:23.055779   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:23.055858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:23.095391   68713 cri.go:89] found id: ""
	I0815 18:40:23.095414   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.095426   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:23.095432   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:23.095483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:23.134907   68713 cri.go:89] found id: ""
	I0815 18:40:23.134936   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.134943   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:23.134949   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:23.134994   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:23.171806   68713 cri.go:89] found id: ""
	I0815 18:40:23.171845   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.171854   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:23.171861   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:23.171924   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:23.205378   68713 cri.go:89] found id: ""
	I0815 18:40:23.205404   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.205412   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:23.205417   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:23.205467   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:23.239503   68713 cri.go:89] found id: ""
	I0815 18:40:23.239531   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.239540   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:23.239547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:23.239614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:23.275802   68713 cri.go:89] found id: ""
	I0815 18:40:23.275828   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.275842   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:23.275849   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:23.275894   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:23.310127   68713 cri.go:89] found id: ""
	I0815 18:40:23.310154   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.310167   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:23.310173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:23.310219   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:23.344646   68713 cri.go:89] found id: ""
	I0815 18:40:23.344674   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.344685   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:23.344696   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:23.344711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:23.397260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:23.397310   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:23.425518   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:23.425553   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:23.495528   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:23.495547   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:23.495562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:23.574489   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:23.574524   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.119044   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:26.133806   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:26.133880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:26.175683   68713 cri.go:89] found id: ""
	I0815 18:40:26.175711   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.175722   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:26.175730   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:26.175789   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:26.210634   68713 cri.go:89] found id: ""
	I0815 18:40:26.210658   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.210665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:26.210671   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:26.210724   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:26.244146   68713 cri.go:89] found id: ""
	I0815 18:40:26.244176   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.244187   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:26.244195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:26.244274   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:26.277312   68713 cri.go:89] found id: ""
	I0815 18:40:26.277335   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.277343   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:26.277349   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:26.277410   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:26.311538   68713 cri.go:89] found id: ""
	I0815 18:40:26.311562   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.311570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:26.311576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:26.311623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:26.347816   68713 cri.go:89] found id: ""
	I0815 18:40:26.347840   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.347847   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:26.347853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:26.347906   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:26.381211   68713 cri.go:89] found id: ""
	I0815 18:40:26.381234   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.381242   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:26.381248   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:26.381303   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:26.413982   68713 cri.go:89] found id: ""
	I0815 18:40:26.414010   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.414018   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:26.414027   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:26.414038   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:26.500686   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:26.500721   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.537615   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:26.537642   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:26.590119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:26.590150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:26.603713   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:26.603739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:26.675455   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:25.349400   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.853388   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:25.155463   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.155580   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.156973   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:26.158898   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:28.658576   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.176084   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:29.189743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:29.189813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:29.225500   68713 cri.go:89] found id: ""
	I0815 18:40:29.225536   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.225548   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:29.225557   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:29.225614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:29.261839   68713 cri.go:89] found id: ""
	I0815 18:40:29.261866   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.261877   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:29.261884   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:29.261946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:29.296685   68713 cri.go:89] found id: ""
	I0815 18:40:29.296708   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.296716   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:29.296728   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:29.296787   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:29.332524   68713 cri.go:89] found id: ""
	I0815 18:40:29.332550   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.332558   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:29.332564   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:29.332615   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:29.368918   68713 cri.go:89] found id: ""
	I0815 18:40:29.368943   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.368953   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:29.368961   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:29.369020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:29.403175   68713 cri.go:89] found id: ""
	I0815 18:40:29.403200   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.403211   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:29.403218   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:29.403279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:29.438957   68713 cri.go:89] found id: ""
	I0815 18:40:29.438981   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.438989   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:29.438994   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:29.439051   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:29.472153   68713 cri.go:89] found id: ""
	I0815 18:40:29.472184   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.472195   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:29.472206   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:29.472221   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:29.560484   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:29.560547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:29.600366   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:29.600402   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:29.656536   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:29.656569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:29.669899   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:29.669925   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:29.738515   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:32.239207   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:32.253976   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:32.254048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:32.290918   68713 cri.go:89] found id: ""
	I0815 18:40:32.290942   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.290951   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:32.290957   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:32.291009   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:32.325567   68713 cri.go:89] found id: ""
	I0815 18:40:32.325596   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.325606   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:32.325613   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:32.325674   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:32.360959   68713 cri.go:89] found id: ""
	I0815 18:40:32.360994   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.361005   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:32.361015   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:32.361090   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:32.398583   68713 cri.go:89] found id: ""
	I0815 18:40:32.398614   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.398625   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:32.398633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:32.398696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:32.432980   68713 cri.go:89] found id: ""
	I0815 18:40:32.433007   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.433017   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:32.433024   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:32.433088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:32.467645   68713 cri.go:89] found id: ""
	I0815 18:40:32.467678   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.467688   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:32.467697   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:32.467757   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:32.504233   68713 cri.go:89] found id: ""
	I0815 18:40:32.504265   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.504275   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:32.504282   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:32.504347   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:32.539127   68713 cri.go:89] found id: ""
	I0815 18:40:32.539160   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.539175   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:32.539186   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:32.539200   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:32.620782   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:32.620818   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:32.660920   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:32.660950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:32.714392   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:32.714425   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:32.727629   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:32.727655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:40:30.349267   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:32.349896   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:34.154871   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.157219   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:33.158733   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:35.158871   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:40:32.801258   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.301393   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:35.315460   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:35.315515   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:35.352266   68713 cri.go:89] found id: ""
	I0815 18:40:35.352287   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.352295   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:35.352301   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:35.352345   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:35.387274   68713 cri.go:89] found id: ""
	I0815 18:40:35.387305   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.387316   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:35.387324   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:35.387386   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:35.422376   68713 cri.go:89] found id: ""
	I0815 18:40:35.422403   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.422413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:35.422419   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:35.422464   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:35.456423   68713 cri.go:89] found id: ""
	I0815 18:40:35.456452   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.456459   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:35.456465   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:35.456544   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:35.494878   68713 cri.go:89] found id: ""
	I0815 18:40:35.494903   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.494912   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:35.494919   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:35.494980   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:35.528027   68713 cri.go:89] found id: ""
	I0815 18:40:35.528051   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.528062   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:35.528069   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:35.528128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:35.568543   68713 cri.go:89] found id: ""
	I0815 18:40:35.568570   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.568580   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:35.568587   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:35.568654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:35.627717   68713 cri.go:89] found id: ""
	I0815 18:40:35.627747   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.627766   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:35.627777   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:35.627792   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:35.691497   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:35.691530   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:35.705062   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:35.705092   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:35.783785   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.783806   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:35.783819   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:35.867282   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:35.867317   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:34.848226   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.849242   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.850686   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.154981   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.155165   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:37.659017   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.158408   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.407940   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:38.421571   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:38.421648   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:38.456551   68713 cri.go:89] found id: ""
	I0815 18:40:38.456586   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.456597   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:38.456604   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:38.456665   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:38.494133   68713 cri.go:89] found id: ""
	I0815 18:40:38.494167   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.494179   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:38.494186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:38.494253   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:38.531566   68713 cri.go:89] found id: ""
	I0815 18:40:38.531599   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.531610   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:38.531617   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:38.531678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:38.567613   68713 cri.go:89] found id: ""
	I0815 18:40:38.567640   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.567652   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:38.567659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:38.567717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:38.603172   68713 cri.go:89] found id: ""
	I0815 18:40:38.603201   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.603212   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:38.603225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:38.603284   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:38.639600   68713 cri.go:89] found id: ""
	I0815 18:40:38.639629   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.639640   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:38.639648   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:38.639710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:38.675780   68713 cri.go:89] found id: ""
	I0815 18:40:38.675811   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.675821   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:38.675828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:38.675885   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:38.708745   68713 cri.go:89] found id: ""
	I0815 18:40:38.708775   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.708786   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:38.708796   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:38.708815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:38.722485   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:38.722514   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:38.793913   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:38.793936   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:38.793950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:38.880706   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:38.880744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:38.919505   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:38.919533   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.472452   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:41.486204   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:41.486264   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:41.520251   68713 cri.go:89] found id: ""
	I0815 18:40:41.520282   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.520294   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:41.520302   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:41.520362   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:41.561294   68713 cri.go:89] found id: ""
	I0815 18:40:41.561325   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.561336   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:41.561343   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:41.561403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:41.595290   68713 cri.go:89] found id: ""
	I0815 18:40:41.595318   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.595326   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:41.595331   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:41.595381   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:41.629706   68713 cri.go:89] found id: ""
	I0815 18:40:41.629736   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.629744   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:41.629750   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:41.629816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:41.671862   68713 cri.go:89] found id: ""
	I0815 18:40:41.671885   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.671893   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:41.671898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:41.671951   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:41.710298   68713 cri.go:89] found id: ""
	I0815 18:40:41.710349   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.710360   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:41.710368   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:41.710425   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:41.745434   68713 cri.go:89] found id: ""
	I0815 18:40:41.745472   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.745487   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:41.745492   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:41.745548   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:41.781038   68713 cri.go:89] found id: ""
	I0815 18:40:41.781073   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.781081   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:41.781088   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:41.781099   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:41.863977   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:41.864023   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:41.907477   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:41.907505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.962921   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:41.962956   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:41.976458   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:41.976505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:42.044372   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:41.349260   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.349615   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.656633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.154626   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:42.658519   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.659640   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.544803   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:44.559538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:44.559595   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:44.595471   68713 cri.go:89] found id: ""
	I0815 18:40:44.595501   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.595511   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:44.595518   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:44.595581   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:44.630148   68713 cri.go:89] found id: ""
	I0815 18:40:44.630173   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.630181   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:44.630189   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:44.630245   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:44.666084   68713 cri.go:89] found id: ""
	I0815 18:40:44.666110   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.666119   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:44.666126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:44.666180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:44.700286   68713 cri.go:89] found id: ""
	I0815 18:40:44.700320   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.700331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:44.700339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:44.700394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:44.734115   68713 cri.go:89] found id: ""
	I0815 18:40:44.734143   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.734151   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:44.734157   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:44.734216   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:44.770306   68713 cri.go:89] found id: ""
	I0815 18:40:44.770363   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.770376   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:44.770383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:44.770453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:44.806766   68713 cri.go:89] found id: ""
	I0815 18:40:44.806790   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.806798   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:44.806803   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:44.806865   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:44.843574   68713 cri.go:89] found id: ""
	I0815 18:40:44.843603   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.843613   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:44.843623   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:44.843638   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:44.896119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:44.896148   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:44.909537   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:44.909562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:44.980268   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:44.980290   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:44.980307   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:45.066589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:45.066626   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:47.605934   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:47.620644   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:47.620709   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:47.660939   68713 cri.go:89] found id: ""
	I0815 18:40:47.660960   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.660967   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:47.660973   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:47.661021   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:47.701018   68713 cri.go:89] found id: ""
	I0815 18:40:47.701047   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.701059   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:47.701107   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:47.701177   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:47.739487   68713 cri.go:89] found id: ""
	I0815 18:40:47.739514   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.739523   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:47.739528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:47.739584   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:47.781483   68713 cri.go:89] found id: ""
	I0815 18:40:47.781508   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.781515   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:47.781520   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:47.781571   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:45.850565   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.851368   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:45.156177   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.654437   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.157895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:49.658101   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.816781   68713 cri.go:89] found id: ""
	I0815 18:40:47.816806   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.816813   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:47.816819   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:47.816875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:47.853951   68713 cri.go:89] found id: ""
	I0815 18:40:47.853976   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.853984   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:47.853990   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:47.854062   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:47.892208   68713 cri.go:89] found id: ""
	I0815 18:40:47.892237   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.892246   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:47.892252   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:47.892311   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:47.926916   68713 cri.go:89] found id: ""
	I0815 18:40:47.926944   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.926965   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:47.926976   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:47.926990   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:48.002907   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:48.002927   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:48.002942   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:48.085727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:48.085762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:48.127192   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:48.127224   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:48.180172   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:48.180208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:50.694573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:50.709411   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:50.709472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:50.750956   68713 cri.go:89] found id: ""
	I0815 18:40:50.750985   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.750994   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:50.751000   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:50.751048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:50.791072   68713 cri.go:89] found id: ""
	I0815 18:40:50.791149   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.791174   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:50.791186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:50.791247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:50.827692   68713 cri.go:89] found id: ""
	I0815 18:40:50.827717   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.827728   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:50.827735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:50.827794   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:50.866587   68713 cri.go:89] found id: ""
	I0815 18:40:50.866616   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.866626   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:50.866633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:50.866692   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:50.907012   68713 cri.go:89] found id: ""
	I0815 18:40:50.907040   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.907047   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:50.907053   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:50.907101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:50.951212   68713 cri.go:89] found id: ""
	I0815 18:40:50.951243   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.951256   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:50.951263   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:50.951316   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:50.989771   68713 cri.go:89] found id: ""
	I0815 18:40:50.989802   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.989812   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:50.989818   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:50.989867   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:51.024423   68713 cri.go:89] found id: ""
	I0815 18:40:51.024454   68713 logs.go:276] 0 containers: []
	W0815 18:40:51.024465   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:51.024475   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:51.024500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:51.076973   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:51.077012   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:51.090963   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:51.090989   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:51.169981   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:51.170005   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:51.170029   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:51.248990   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:51.249040   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:50.349092   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.350278   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:50.154517   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.148131   68248 pod_ready.go:82] duration metric: took 4m0.000077937s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	E0815 18:40:52.148161   68248 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 18:40:52.148183   68248 pod_ready.go:39] duration metric: took 4m13.224994468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:40:52.148235   68248 kubeadm.go:597] duration metric: took 4m20.945128985s to restartPrimaryControlPlane
	W0815 18:40:52.148324   68248 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:40:52.148376   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
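	The block above shows the 4m0s readiness wait for metrics-server-6867b74b74-wp5rn expiring, after which minikube gives up on restarting the existing control plane and falls back to "kubeadm reset". A rough way to reproduce the same wait and see why the pod stays unready, assuming a working kubeconfig context for this profile (the context name is a placeholder):

		# reproduce the readiness wait that timed out above (placeholder context name)
		kubectl --context <profile> -n kube-system wait pod metrics-server-6867b74b74-wp5rn --for=condition=Ready --timeout=4m
		# inspect the pod's events and conditions for the reason it never became Ready
		kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-wp5rn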
	I0815 18:40:51.660289   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:54.157718   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:53.790172   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:53.803752   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:53.803816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:53.843203   68713 cri.go:89] found id: ""
	I0815 18:40:53.843231   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.843246   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:53.843254   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:53.843314   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:53.878975   68713 cri.go:89] found id: ""
	I0815 18:40:53.879000   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.879008   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:53.879013   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:53.879078   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:53.915640   68713 cri.go:89] found id: ""
	I0815 18:40:53.915668   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.915675   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:53.915683   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:53.915746   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:53.956312   68713 cri.go:89] found id: ""
	I0815 18:40:53.956340   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.956356   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:53.956365   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:53.956426   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:53.992276   68713 cri.go:89] found id: ""
	I0815 18:40:53.992304   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.992314   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:53.992322   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:53.992387   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:54.034653   68713 cri.go:89] found id: ""
	I0815 18:40:54.034682   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.034693   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:54.034701   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:54.034761   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:54.072993   68713 cri.go:89] found id: ""
	I0815 18:40:54.073018   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.073027   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:54.073038   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:54.073107   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:54.107414   68713 cri.go:89] found id: ""
	I0815 18:40:54.107446   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.107456   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:54.107466   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:54.107481   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:54.145900   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:54.145928   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:54.197609   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:54.197639   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:54.211384   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:54.211410   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:54.280991   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:54.281018   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:54.281031   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:56.868270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:56.881168   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:56.881248   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:56.915206   68713 cri.go:89] found id: ""
	I0815 18:40:56.915235   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.915243   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:56.915249   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:56.915308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:56.950838   68713 cri.go:89] found id: ""
	I0815 18:40:56.950864   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.950873   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:56.950879   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:56.950937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:56.993625   68713 cri.go:89] found id: ""
	I0815 18:40:56.993649   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.993656   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:56.993662   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:56.993718   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:57.029109   68713 cri.go:89] found id: ""
	I0815 18:40:57.029139   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.029150   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:57.029158   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:57.029213   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:57.063480   68713 cri.go:89] found id: ""
	I0815 18:40:57.063518   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.063530   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:57.063538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:57.063598   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:57.102830   68713 cri.go:89] found id: ""
	I0815 18:40:57.102859   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.102870   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:57.102877   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:57.102938   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:57.137116   68713 cri.go:89] found id: ""
	I0815 18:40:57.137146   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.137159   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:57.137173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:57.137235   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:57.174678   68713 cri.go:89] found id: ""
	I0815 18:40:57.174706   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.174717   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:57.174727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:57.174741   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:57.213270   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:57.213311   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:57.269463   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:57.269500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:57.283891   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:57.283915   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:57.355563   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:57.355589   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:57.355601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:54.849266   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:57.350343   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:56.657843   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:58.658098   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:59.943493   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:59.957225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:59.957285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:59.993113   68713 cri.go:89] found id: ""
	I0815 18:40:59.993142   68713 logs.go:276] 0 containers: []
	W0815 18:40:59.993153   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:59.993167   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:59.993228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:00.033485   68713 cri.go:89] found id: ""
	I0815 18:41:00.033515   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.033525   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:00.033533   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:00.033594   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:00.070808   68713 cri.go:89] found id: ""
	I0815 18:41:00.070830   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.070838   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:00.070844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:00.070893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:00.113043   68713 cri.go:89] found id: ""
	I0815 18:41:00.113067   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.113076   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:00.113082   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:00.113139   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:00.148089   68713 cri.go:89] found id: ""
	I0815 18:41:00.148118   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.148129   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:00.148136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:00.148206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:00.188343   68713 cri.go:89] found id: ""
	I0815 18:41:00.188375   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.188386   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:00.188394   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:00.188448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:00.224287   68713 cri.go:89] found id: ""
	I0815 18:41:00.224312   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.224323   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:00.224337   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:00.224398   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:00.263983   68713 cri.go:89] found id: ""
	I0815 18:41:00.264008   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.264016   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:00.264025   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:00.264037   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:00.278057   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:00.278083   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:00.355112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:00.355133   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:00.355146   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:00.436636   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:00.436672   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:00.474774   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:00.474801   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:59.849797   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:02.349363   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:01.158004   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.158380   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:05.658860   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.027434   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:03.041422   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:03.041496   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:03.074093   68713 cri.go:89] found id: ""
	I0815 18:41:03.074119   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.074130   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:03.074138   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:03.074198   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:03.111489   68713 cri.go:89] found id: ""
	I0815 18:41:03.111517   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.111529   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:03.111537   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:03.111599   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:03.147716   68713 cri.go:89] found id: ""
	I0815 18:41:03.147747   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.147756   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:03.147762   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:03.147825   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:03.184609   68713 cri.go:89] found id: ""
	I0815 18:41:03.184635   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.184644   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:03.184652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:03.184710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:03.221839   68713 cri.go:89] found id: ""
	I0815 18:41:03.221869   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.221878   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:03.221883   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:03.221935   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:03.262619   68713 cri.go:89] found id: ""
	I0815 18:41:03.262649   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.262661   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:03.262669   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:03.262733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:03.297826   68713 cri.go:89] found id: ""
	I0815 18:41:03.297849   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.297864   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:03.297875   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:03.297922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:03.345046   68713 cri.go:89] found id: ""
	I0815 18:41:03.345074   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.345083   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:03.345095   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:03.345133   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:03.416878   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:03.416905   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:03.416920   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:03.491548   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:03.491583   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:03.533821   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:03.533852   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:03.587749   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:03.587787   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.104002   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:06.118123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:06.118195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:06.156179   68713 cri.go:89] found id: ""
	I0815 18:41:06.156204   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.156213   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:06.156218   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:06.156275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:06.192834   68713 cri.go:89] found id: ""
	I0815 18:41:06.192858   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.192866   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:06.192871   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:06.192918   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:06.228355   68713 cri.go:89] found id: ""
	I0815 18:41:06.228379   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.228387   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:06.228393   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:06.228453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:06.262041   68713 cri.go:89] found id: ""
	I0815 18:41:06.262068   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.262079   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:06.262086   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:06.262152   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:06.303217   68713 cri.go:89] found id: ""
	I0815 18:41:06.303249   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.303261   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:06.303268   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:06.303335   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:06.337180   68713 cri.go:89] found id: ""
	I0815 18:41:06.337208   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.337215   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:06.337222   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:06.337270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:06.375054   68713 cri.go:89] found id: ""
	I0815 18:41:06.375081   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.375088   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:06.375095   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:06.375163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:06.412188   68713 cri.go:89] found id: ""
	I0815 18:41:06.412216   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.412227   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:06.412239   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:06.412255   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.425607   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:06.425633   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:06.500853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:06.500872   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:06.500883   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:06.577297   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:06.577333   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:06.620209   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:06.620239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:04.848677   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:06.849254   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.849300   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.157734   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:10.157969   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:09.171606   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:09.184197   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:09.184257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:09.217865   68713 cri.go:89] found id: ""
	I0815 18:41:09.217893   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.217904   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:09.217912   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:09.217967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:09.254032   68713 cri.go:89] found id: ""
	I0815 18:41:09.254055   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.254064   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:09.254073   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:09.254128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:09.291772   68713 cri.go:89] found id: ""
	I0815 18:41:09.291798   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.291808   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:09.291816   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:09.291880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:09.326695   68713 cri.go:89] found id: ""
	I0815 18:41:09.326717   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.326726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:09.326731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:09.326791   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:09.365779   68713 cri.go:89] found id: ""
	I0815 18:41:09.365807   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.365818   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:09.365825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:09.365880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:09.413475   68713 cri.go:89] found id: ""
	I0815 18:41:09.413500   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.413509   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:09.413514   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:09.413578   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:09.449483   68713 cri.go:89] found id: ""
	I0815 18:41:09.449511   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.449521   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:09.449528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:09.449623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:09.487484   68713 cri.go:89] found id: ""
	I0815 18:41:09.487513   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.487525   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:09.487535   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:09.487549   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:09.536746   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:09.536777   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:09.549912   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:09.549944   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:09.619192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:09.619227   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:09.619246   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:09.698370   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:09.698404   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:12.240745   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:12.254814   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:12.254875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:12.291346   68713 cri.go:89] found id: ""
	I0815 18:41:12.291376   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.291387   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:12.291395   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:12.291456   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:12.324832   68713 cri.go:89] found id: ""
	I0815 18:41:12.324867   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.324878   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:12.324886   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:12.324950   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:12.360172   68713 cri.go:89] found id: ""
	I0815 18:41:12.360193   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.360201   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:12.360206   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:12.360251   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:12.394671   68713 cri.go:89] found id: ""
	I0815 18:41:12.394700   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.394710   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:12.394731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:12.394800   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:12.428951   68713 cri.go:89] found id: ""
	I0815 18:41:12.428999   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.429007   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:12.429013   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:12.429057   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:12.466035   68713 cri.go:89] found id: ""
	I0815 18:41:12.466061   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.466069   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:12.466075   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:12.466125   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:12.500003   68713 cri.go:89] found id: ""
	I0815 18:41:12.500031   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.500042   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:12.500050   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:12.500105   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:12.537433   68713 cri.go:89] found id: ""
	I0815 18:41:12.537457   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.537464   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:12.537473   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:12.537484   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:12.586768   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:12.586809   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:12.600549   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:12.600578   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:12.673112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:12.673138   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:12.673154   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:12.754689   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:12.754726   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:11.348767   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.349973   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:12.158249   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.158354   68429 pod_ready.go:82] duration metric: took 4m0.006607137s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:13.158373   68429 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:13.158381   68429 pod_ready.go:39] duration metric: took 4m7.064501997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:13.158395   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:13.158423   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:13.158467   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:13.203746   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.203771   68429 cri.go:89] found id: ""
	I0815 18:41:13.203779   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:13.203840   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.208188   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:13.208248   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:13.245326   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.245351   68429 cri.go:89] found id: ""
	I0815 18:41:13.245359   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:13.245412   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.250212   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:13.250281   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:13.296537   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:13.296565   68429 cri.go:89] found id: ""
	I0815 18:41:13.296576   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:13.296634   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.300823   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:13.300881   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:13.337973   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.338018   68429 cri.go:89] found id: ""
	I0815 18:41:13.338031   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:13.338083   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.342251   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:13.342307   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:13.379921   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.379948   68429 cri.go:89] found id: ""
	I0815 18:41:13.379957   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:13.380005   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.384451   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:13.384539   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:13.421077   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:13.421113   68429 cri.go:89] found id: ""
	I0815 18:41:13.421122   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:13.421180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.425566   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:13.425640   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:13.468663   68429 cri.go:89] found id: ""
	I0815 18:41:13.468688   68429 logs.go:276] 0 containers: []
	W0815 18:41:13.468696   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:13.468701   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:13.468753   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:13.506689   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:13.506711   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:13.506715   68429 cri.go:89] found id: ""
	I0815 18:41:13.506723   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:13.506784   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.511177   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.515519   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:13.515543   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:13.583771   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:13.583806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:13.714906   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:13.714945   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.766512   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:13.766548   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.818416   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:13.818450   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.859035   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:13.859073   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.901515   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:13.901546   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:14.437262   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:14.437304   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:14.453511   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:14.453551   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:14.489238   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:14.489267   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:14.540141   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:14.540184   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:14.574758   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:14.574785   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:14.609370   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:14.609398   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:15.294667   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:15.307758   68713 kubeadm.go:597] duration metric: took 4m2.67500099s to restartPrimaryControlPlane
	W0815 18:41:15.307840   68713 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:41:15.307872   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:41:15.761255   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:15.776049   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:15.786643   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:15.796517   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:15.796537   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:15.796585   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:15.806118   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:15.806167   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:15.816363   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:15.826396   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:15.826449   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:15.836538   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.847035   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:15.847093   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.857475   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:15.867084   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:15.867144   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:15.879736   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:15.954497   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:41:15.954588   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:16.098128   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:16.098244   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:16.098345   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:41:16.288507   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:16.290439   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:16.290555   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:16.290656   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:16.290756   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:16.290831   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:16.290923   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:16.291003   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:16.291096   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:16.291182   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:16.291280   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:16.291396   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:16.291457   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:16.291509   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:16.363570   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:16.549782   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:16.789250   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:16.983388   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:17.004293   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:17.006438   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:17.006485   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:17.154583   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:17.156594   68713 out.go:235]   - Booting up control plane ...
	I0815 18:41:17.156717   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:17.177351   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:17.179286   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:17.180313   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:17.183829   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:41:15.850424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.348986   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.430273   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.281857018s)
	I0815 18:41:18.430359   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:18.445633   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:18.457459   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:18.469748   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:18.469769   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:18.469818   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:18.480099   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:18.480146   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:18.491871   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:18.501274   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:18.501339   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:18.510186   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.518803   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:18.518863   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.527843   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:18.536437   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:18.536514   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:18.545573   68248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:18.596478   68248 kubeadm.go:310] W0815 18:41:18.577134    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.597311   68248 kubeadm.go:310] W0815 18:41:18.577958    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.709937   68248 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:41:17.151343   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:17.173653   68429 api_server.go:72] duration metric: took 4m18.293407117s to wait for apiserver process to appear ...
	I0815 18:41:17.173677   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:17.173724   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:17.173784   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:17.211484   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.211509   68429 cri.go:89] found id: ""
	I0815 18:41:17.211518   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:17.211583   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.216011   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:17.216107   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:17.265454   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.265486   68429 cri.go:89] found id: ""
	I0815 18:41:17.265497   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:17.265554   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.269804   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:17.269868   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:17.310339   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.310363   68429 cri.go:89] found id: ""
	I0815 18:41:17.310371   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:17.310435   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.315639   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:17.315695   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:17.352364   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.352387   68429 cri.go:89] found id: ""
	I0815 18:41:17.352396   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:17.352452   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.356782   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:17.356848   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:17.396704   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.396733   68429 cri.go:89] found id: ""
	I0815 18:41:17.396744   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:17.396799   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.400920   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:17.400985   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:17.440361   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.440390   68429 cri.go:89] found id: ""
	I0815 18:41:17.440400   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:17.440464   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.445057   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:17.445127   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:17.487341   68429 cri.go:89] found id: ""
	I0815 18:41:17.487369   68429 logs.go:276] 0 containers: []
	W0815 18:41:17.487380   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:17.487388   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:17.487446   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:17.528197   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.528218   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.528223   68429 cri.go:89] found id: ""
	I0815 18:41:17.528229   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:17.528285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.532536   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.536745   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:17.536768   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.574236   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:17.574268   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:17.617822   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:17.617853   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.673009   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:17.673037   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.717620   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:17.717647   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.764641   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:17.764671   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.815586   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:17.815618   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.855287   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:17.855310   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.906486   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:17.906514   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.941540   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:17.941562   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:18.373461   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:18.373497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:18.454203   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:18.454244   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:18.470284   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:18.470315   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:20.349635   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:22.350034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:21.080947   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:41:21.085334   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:41:21.086420   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:21.086442   68429 api_server.go:131] duration metric: took 3.912756949s to wait for apiserver health ...
	I0815 18:41:21.086452   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:21.086481   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:21.086537   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:21.124183   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.124210   68429 cri.go:89] found id: ""
	I0815 18:41:21.124218   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:21.124285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.128402   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:21.128472   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:21.164737   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.164768   68429 cri.go:89] found id: ""
	I0815 18:41:21.164779   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:21.164835   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.170622   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:21.170699   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:21.206823   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.206847   68429 cri.go:89] found id: ""
	I0815 18:41:21.206855   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:21.206910   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.211055   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:21.211128   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:21.255529   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.255555   68429 cri.go:89] found id: ""
	I0815 18:41:21.255565   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:21.255629   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.260062   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:21.260139   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:21.298058   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.298116   68429 cri.go:89] found id: ""
	I0815 18:41:21.298124   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:21.298180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.302821   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:21.302892   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:21.340895   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.340925   68429 cri.go:89] found id: ""
	I0815 18:41:21.340936   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:21.341003   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.345545   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:21.345638   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:21.383180   68429 cri.go:89] found id: ""
	I0815 18:41:21.383212   68429 logs.go:276] 0 containers: []
	W0815 18:41:21.383223   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:21.383232   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:21.383301   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:21.421152   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:21.421178   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.421185   68429 cri.go:89] found id: ""
	I0815 18:41:21.421198   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:21.421257   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.426326   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.430307   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:21.430351   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:21.562655   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:21.562697   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.613436   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:21.613470   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.674678   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:21.674721   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.717283   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:21.717316   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.760218   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:21.760249   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.802313   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:21.802352   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.874565   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:21.874608   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:21.891629   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:21.891666   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:21.934128   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:21.934170   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.985467   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:21.985497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:22.023731   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:22.023770   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:22.403584   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:22.403626   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
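The commands above collect diagnostics by shelling out to crictl for each container and to journalctl for the CRI-O and kubelet units. A minimal local sketch of that collection in Go follows; it is only an illustration (minikube runs these commands over SSH via its ssh_runner), and the container ID is a placeholder prefix taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs returns the last 400 lines of a CRI container's logs,
// the same "crictl logs --tail 400 <id>" call recorded above.
func tailContainerLogs(containerID string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", containerID).CombinedOutput()
	return string(out), err
}

// tailKubeletJournal mirrors the "journalctl -u kubelet -n 400" call above.
func tailKubeletJournal() (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := tailContainerLogs("5ba0de31ac4d") // placeholder ID prefix from the log
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(logs)
}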
	I0815 18:41:25.005734   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:41:25.005760   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.005766   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.005770   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.005775   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.005778   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.005781   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.005788   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.005793   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.005799   68429 system_pods.go:74] duration metric: took 3.919341536s to wait for pod list to return data ...
	I0815 18:41:25.005806   68429 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:25.008398   68429 default_sa.go:45] found service account: "default"
	I0815 18:41:25.008419   68429 default_sa.go:55] duration metric: took 2.608281ms for default service account to be created ...
	I0815 18:41:25.008427   68429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:25.012784   68429 system_pods.go:86] 8 kube-system pods found
	I0815 18:41:25.012804   68429 system_pods.go:89] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.012810   68429 system_pods.go:89] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.012817   68429 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.012821   68429 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.012825   68429 system_pods.go:89] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.012828   68429 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.012834   68429 system_pods.go:89] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.012838   68429 system_pods.go:89] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.012850   68429 system_pods.go:126] duration metric: took 4.415694ms to wait for k8s-apps to be running ...
	I0815 18:41:25.012858   68429 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:25.012905   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:25.028245   68429 system_svc.go:56] duration metric: took 15.378403ms WaitForService to wait for kubelet
	I0815 18:41:25.028272   68429 kubeadm.go:582] duration metric: took 4m26.148030358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:25.028290   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:25.030696   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:25.030717   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:25.030728   68429 node_conditions.go:105] duration metric: took 2.43352ms to run NodePressure ...
	I0815 18:41:25.030742   68429 start.go:241] waiting for startup goroutines ...
	I0815 18:41:25.030751   68429 start.go:246] waiting for cluster config update ...
	I0815 18:41:25.030768   68429 start.go:255] writing updated cluster config ...
	I0815 18:41:25.031028   68429 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:25.077910   68429 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:25.079973   68429 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-423062" cluster and "default" namespace by default
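The "waiting for k8s-apps to be running" step above repeatedly lists kube-system pods and checks their phase before declaring the cluster ready. A rough client-go equivalent is sketched below; it assumes the kubeconfig this run writes (/home/jenkins/minikube-integration/19450-13013/kubeconfig) and is not minikube's own system_pods check:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19450-13013/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Report any kube-system pod that is not yet in the Running phase,
	// e.g. the Pending metrics-server pod seen in the listing above.
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("not running yet: %s (%s)\n", p.Name, p.Status.Phase)
		}
	}
}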
	I0815 18:41:27.911884   68248 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 18:41:27.911943   68248 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:27.912011   68248 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:27.912130   68248 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:27.912272   68248 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 18:41:27.912359   68248 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:27.913884   68248 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:27.913990   68248 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:27.914092   68248 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:27.914197   68248 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:27.914289   68248 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:27.914362   68248 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:27.914433   68248 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:27.914521   68248 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:27.914606   68248 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:27.914859   68248 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:27.914984   68248 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:27.915040   68248 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:27.915119   68248 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:27.915190   68248 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:27.915268   68248 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 18:41:27.915336   68248 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:27.915419   68248 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:27.915500   68248 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:27.915606   68248 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:27.915691   68248 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:27.917229   68248 out.go:235]   - Booting up control plane ...
	I0815 18:41:27.917311   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:27.917377   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:27.917433   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:27.917521   68248 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:27.917590   68248 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:27.917623   68248 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:27.917740   68248 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 18:41:27.917829   68248 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 18:41:27.917880   68248 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00200618s
	I0815 18:41:27.917954   68248 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 18:41:27.918011   68248 kubeadm.go:310] [api-check] The API server is healthy after 5.501605719s
	I0815 18:41:27.918122   68248 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 18:41:27.918268   68248 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 18:41:27.918361   68248 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 18:41:27.918626   68248 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-555028 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 18:41:27.918723   68248 kubeadm.go:310] [bootstrap-token] Using token: 99xu37.bm6hiisu91f6rbvd
	I0815 18:41:27.920248   68248 out.go:235]   - Configuring RBAC rules ...
	I0815 18:41:27.920360   68248 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 18:41:27.920467   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 18:41:27.920651   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 18:41:27.920785   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 18:41:27.920938   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 18:41:27.921052   68248 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 18:41:27.921225   68248 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 18:41:27.921286   68248 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 18:41:27.921356   68248 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 18:41:27.921369   68248 kubeadm.go:310] 
	I0815 18:41:27.921422   68248 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 18:41:27.921428   68248 kubeadm.go:310] 
	I0815 18:41:27.921488   68248 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 18:41:27.921497   68248 kubeadm.go:310] 
	I0815 18:41:27.921521   68248 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 18:41:27.921570   68248 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 18:41:27.921619   68248 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 18:41:27.921625   68248 kubeadm.go:310] 
	I0815 18:41:27.921698   68248 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 18:41:27.921711   68248 kubeadm.go:310] 
	I0815 18:41:27.921776   68248 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 18:41:27.921787   68248 kubeadm.go:310] 
	I0815 18:41:27.921858   68248 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 18:41:27.921963   68248 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 18:41:27.922055   68248 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 18:41:27.922064   68248 kubeadm.go:310] 
	I0815 18:41:27.922166   68248 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 18:41:27.922281   68248 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 18:41:27.922306   68248 kubeadm.go:310] 
	I0815 18:41:27.922413   68248 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922550   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 18:41:27.922593   68248 kubeadm.go:310] 	--control-plane 
	I0815 18:41:27.922603   68248 kubeadm.go:310] 
	I0815 18:41:27.922703   68248 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 18:41:27.922712   68248 kubeadm.go:310] 
	I0815 18:41:27.922800   68248 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922901   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
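The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short sketch that recomputes it from the certificate directory kubeadm reports above (/var/lib/minikube/certs):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Recompute the hash shown after "--discovery-token-ca-cert-hash sha256:".
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}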
	I0815 18:41:27.922909   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:41:27.922916   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:41:27.924596   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:41:24.849483   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.350715   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.926142   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:41:27.938307   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
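Only the size of the conflist (496 bytes) is recorded above, not its contents. Purely to illustrate the general shape of a bridge CNI config placed in /etc/cni/net.d, the JSON below is an assumption; the subnet, plugin list, and name are not taken from this run:

package main

import "os"

// conflist is an illustrative bridge + host-local configuration, not the
// file minikube actually wrote in this run.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}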
	I0815 18:41:27.958862   68248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:41:27.958974   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:27.959032   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-555028 minikube.k8s.io/updated_at=2024_08_15T18_41_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=embed-certs-555028 minikube.k8s.io/primary=true
	I0815 18:41:28.156844   68248 ops.go:34] apiserver oom_adj: -16
	I0815 18:41:28.157122   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:28.657735   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.157713   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.658109   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.157486   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.657573   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.157463   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.658073   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.757929   68248 kubeadm.go:1113] duration metric: took 3.799012728s to wait for elevateKubeSystemPrivileges
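elevateKubeSystemPrivileges above creates the minikube-rbac ClusterRoleBinding (cluster-admin for the kube-system:default service account) and keeps retrying "kubectl get sa default" until that service account exists. A client-go sketch of creating the same binding, illustrative only (minikube shells out to kubectl as shown above), again assuming this run's kubeconfig path:

package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19450-13013/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl create clusterrolebinding minikube-rbac
	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}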
	I0815 18:41:31.757969   68248 kubeadm.go:394] duration metric: took 5m0.607372858s to StartCluster
	I0815 18:41:31.757992   68248 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.758070   68248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:41:31.759686   68248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.759915   68248 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:41:31.759982   68248 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:41:31.760072   68248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-555028"
	I0815 18:41:31.760090   68248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-555028"
	I0815 18:41:31.760115   68248 addons.go:69] Setting metrics-server=true in profile "embed-certs-555028"
	I0815 18:41:31.760133   68248 addons.go:234] Setting addon metrics-server=true in "embed-certs-555028"
	W0815 18:41:31.760141   68248 addons.go:243] addon metrics-server should already be in state true
	I0815 18:41:31.760148   68248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-555028"
	I0815 18:41:31.760174   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760110   68248 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-555028"
	W0815 18:41:31.760230   68248 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:41:31.760270   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760108   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:41:31.760603   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760619   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760637   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760642   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760658   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760708   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.761566   68248 out.go:177] * Verifying Kubernetes components...
	I0815 18:41:31.762780   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:41:31.777893   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0815 18:41:31.778444   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.779021   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.779049   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.779496   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.780129   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.780182   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.780954   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0815 18:41:31.781146   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0815 18:41:31.781506   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.781586   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.782056   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782061   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782078   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782079   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782437   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782494   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782685   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.783004   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.783034   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.786246   68248 addons.go:234] Setting addon default-storageclass=true in "embed-certs-555028"
	W0815 18:41:31.786270   68248 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:41:31.786300   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.786682   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.786714   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.800152   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	I0815 18:41:31.800729   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.801272   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.801295   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.801656   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.801835   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.803539   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0815 18:41:31.803751   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.804058   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.804640   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.804660   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.805007   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.805157   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.806098   68248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:41:31.806397   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0815 18:41:31.806814   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.807269   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.807450   68248 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:31.807466   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:41:31.807484   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.807744   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.807757   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.808066   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.808889   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.808923   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.809143   68248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:41:31.810575   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:41:31.810593   68248 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:41:31.810619   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.810648   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811760   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.811761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.811802   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811948   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.812101   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.812243   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.814211   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.814675   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814953   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.815117   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.815271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.815391   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.829657   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0815 18:41:31.830122   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.830710   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.830734   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.831077   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.831291   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.833016   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.833271   68248 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:31.833285   68248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:41:31.833302   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.836248   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836655   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.836682   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836908   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.837097   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.837233   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.837410   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.988466   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:41:32.010147   68248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019505   68248 node_ready.go:49] node "embed-certs-555028" has status "Ready":"True"
	I0815 18:41:32.019529   68248 node_ready.go:38] duration metric: took 9.346825ms for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019541   68248 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:32.032036   68248 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:32.125991   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:32.138532   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:41:32.138554   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:41:32.155222   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:32.196478   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:41:32.196517   68248 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:41:32.270461   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:32.270495   68248 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:41:32.405567   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:33.205712   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.050454437s)
	I0815 18:41:33.205772   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205785   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.205793   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.079759984s)
	I0815 18:41:33.205826   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205838   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206153   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206169   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206184   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206194   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206200   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206205   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206210   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206218   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206202   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206226   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206415   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206421   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206430   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206432   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.245033   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.245061   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.245328   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.245343   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.651886   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246273862s)
	I0815 18:41:33.651945   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.651960   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652264   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652307   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.652326   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.652335   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652618   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652640   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652650   68248 addons.go:475] Verifying addon metrics-server=true in "embed-certs-555028"
	I0815 18:41:33.652697   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.654487   68248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:41:29.848462   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:31.850877   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:33.655868   68248 addons.go:510] duration metric: took 1.89588756s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
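The metrics-server addon is applied here with a placeholder image (fake.domain/registry.k8s.io/echoserver:1.4, noted earlier in this run), so its pod is likely to stay ContainersNotReady, which matches the repeated "Ready":"False" polling elsewhere in the log. A small sketch that would surface this by waiting on the Deployment rollout; the deployment name metrics-server is inferred from the pod names, and with the placeholder image the wait is expected to time out:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Wait for the metrics-server Deployment created by the addon manifests above.
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/home/jenkins/minikube-integration/19450-13013/kubeconfig",
		"-n", "kube-system", "rollout", "status", "deployment/metrics-server",
		"--timeout=120s")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// Expected here: the pod never becomes Ready with an unpullable image.
		fmt.Println("rollout did not complete:", err)
	}
}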
	I0815 18:41:34.044605   68248 pod_ready.go:103] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:34.538170   68248 pod_ready.go:93] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.538199   68248 pod_ready.go:82] duration metric: took 2.506135047s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.538212   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543160   68248 pod_ready.go:93] pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.543182   68248 pod_ready.go:82] duration metric: took 4.962289ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543195   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547126   68248 pod_ready.go:93] pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.547144   68248 pod_ready.go:82] duration metric: took 3.94279ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547152   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:36.553459   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:37.555276   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:37.555299   68248 pod_ready.go:82] duration metric: took 3.008140869s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:37.555307   68248 pod_ready.go:39] duration metric: took 5.535754922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:37.555330   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:37.555378   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:37.575318   68248 api_server.go:72] duration metric: took 5.815371975s to wait for apiserver process to appear ...
	I0815 18:41:37.575344   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:37.575361   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:41:37.580989   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:41:37.582142   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:37.582164   68248 api_server.go:131] duration metric: took 6.812732ms to wait for apiserver health ...
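The healthz check above issues an HTTPS GET against the apiserver and expects a 200 response with body "ok". A minimal sketch of the same probe; InsecureSkipVerify is used only to keep the example short, and a real check should verify against the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Probe the endpoint checked above: https://192.168.50.234:8443/healthz
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.50.234:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}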
	I0815 18:41:37.582174   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:37.589334   68248 system_pods.go:59] 9 kube-system pods found
	I0815 18:41:37.589366   68248 system_pods.go:61] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.589377   68248 system_pods.go:61] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.589385   68248 system_pods.go:61] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.589390   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.589397   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.589403   68248 system_pods.go:61] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.589410   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.589422   68248 system_pods.go:61] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.589431   68248 system_pods.go:61] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.589439   68248 system_pods.go:74] duration metric: took 7.257758ms to wait for pod list to return data ...
	I0815 18:41:37.589450   68248 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:37.592468   68248 default_sa.go:45] found service account: "default"
	I0815 18:41:37.592500   68248 default_sa.go:55] duration metric: took 3.029278ms for default service account to be created ...
	I0815 18:41:37.592511   68248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:37.597697   68248 system_pods.go:86] 9 kube-system pods found
	I0815 18:41:37.597725   68248 system_pods.go:89] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.597730   68248 system_pods.go:89] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.597736   68248 system_pods.go:89] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.597740   68248 system_pods.go:89] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.597744   68248 system_pods.go:89] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.597747   68248 system_pods.go:89] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.597751   68248 system_pods.go:89] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.597756   68248 system_pods.go:89] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.597763   68248 system_pods.go:89] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.597769   68248 system_pods.go:126] duration metric: took 5.252997ms to wait for k8s-apps to be running ...
	I0815 18:41:37.597779   68248 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:37.597819   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:37.616004   68248 system_svc.go:56] duration metric: took 18.217091ms WaitForService to wait for kubelet
	I0815 18:41:37.616032   68248 kubeadm.go:582] duration metric: took 5.856091444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:37.616049   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:37.619195   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:37.619215   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:37.619223   68248 node_conditions.go:105] duration metric: took 3.169759ms to run NodePressure ...
	I0815 18:41:37.619234   68248 start.go:241] waiting for startup goroutines ...
	I0815 18:41:37.619242   68248 start.go:246] waiting for cluster config update ...
	I0815 18:41:37.619253   68248 start.go:255] writing updated cluster config ...
	I0815 18:41:37.619520   68248 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:37.669469   68248 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:37.671485   68248 out.go:177] * Done! kubectl is now configured to use "embed-certs-555028" cluster and "default" namespace by default
	I0815 18:41:34.350702   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:36.849248   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:39.348684   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:41.349379   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:43.848932   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:46.348801   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:48.349736   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:50.848728   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:52.850583   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.184855   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:41:57.185437   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:41:57.185667   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:54.851200   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.349542   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:42:02.186077   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:02.186272   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
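The [kubelet-check] failure above is a plain HTTP probe of the kubelet's health endpoint on 127.0.0.1:10248; "connection refused" means nothing is listening there yet, i.e. the kubelet has not come up. The equivalent probe in Go:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same probe kubeadm's [kubelet-check] performs:
	// GET http://127.0.0.1:10248/healthz
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		fmt.Println("kubelet not healthy:", err) // e.g. connection refused, as in the log
		return
	}
	defer resp.Body.Close()
	fmt.Println("kubelet healthz:", resp.Status)
}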
	I0815 18:41:59.349724   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:59.349748   67936 pod_ready.go:82] duration metric: took 4m0.007281981s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:59.349757   67936 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:59.349763   67936 pod_ready.go:39] duration metric: took 4m1.606987494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:59.349779   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:59.349802   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:59.349844   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:59.395509   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:41:59.395541   67936 cri.go:89] found id: ""
	I0815 18:41:59.395552   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:41:59.395608   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.400063   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:59.400140   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:59.435356   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:41:59.435379   67936 cri.go:89] found id: ""
	I0815 18:41:59.435386   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:41:59.435431   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.440159   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:59.440213   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:59.479810   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.479841   67936 cri.go:89] found id: ""
	I0815 18:41:59.479851   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:41:59.479907   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.484341   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:59.484394   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:59.521077   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.521104   67936 cri.go:89] found id: ""
	I0815 18:41:59.521114   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:41:59.521168   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.525075   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:59.525131   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:59.564058   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:41:59.564084   67936 cri.go:89] found id: ""
	I0815 18:41:59.564093   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:41:59.564150   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.568668   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:59.568734   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:59.604385   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.604406   67936 cri.go:89] found id: ""
	I0815 18:41:59.604416   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:41:59.604473   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.609023   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:59.609095   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:59.646289   67936 cri.go:89] found id: ""
	I0815 18:41:59.646334   67936 logs.go:276] 0 containers: []
	W0815 18:41:59.646346   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:59.646355   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:59.646422   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:59.681861   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.681889   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:41:59.681895   67936 cri.go:89] found id: ""
	I0815 18:41:59.681903   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:41:59.681951   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.686379   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.690328   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:59.690353   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:59.759302   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:41:59.759338   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.798249   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:41:59.798276   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.834097   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:41:59.834129   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.885365   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:41:59.885398   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.923013   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:59.923038   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:59.938162   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:59.938192   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:00.077340   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:00.077377   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:00.122292   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:00.122323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:00.165209   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:00.165235   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:00.201278   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:00.201317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:00.238007   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:00.238042   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:00.709997   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:00.710043   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.252351   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:42:03.268074   67936 api_server.go:72] duration metric: took 4m12.770065297s to wait for apiserver process to appear ...
	I0815 18:42:03.268104   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:42:03.268159   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:03.268227   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:03.305890   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:03.305913   67936 cri.go:89] found id: ""
	I0815 18:42:03.305923   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:03.305981   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.309958   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:03.310019   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:03.344602   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:03.344630   67936 cri.go:89] found id: ""
	I0815 18:42:03.344639   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:03.344696   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.349261   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:03.349317   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:03.383892   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:03.383912   67936 cri.go:89] found id: ""
	I0815 18:42:03.383919   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:03.383968   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.388158   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:03.388219   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:03.423264   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.423293   67936 cri.go:89] found id: ""
	I0815 18:42:03.423303   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:03.423352   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.427436   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:03.427496   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:03.470792   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.470819   67936 cri.go:89] found id: ""
	I0815 18:42:03.470829   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:03.470890   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.475884   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:03.475956   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:03.513081   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.513103   67936 cri.go:89] found id: ""
	I0815 18:42:03.513110   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:03.513161   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.517913   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:03.517985   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:03.556149   67936 cri.go:89] found id: ""
	I0815 18:42:03.556180   67936 logs.go:276] 0 containers: []
	W0815 18:42:03.556191   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:03.556199   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:03.556257   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:03.595987   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:03.596015   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:03.596021   67936 cri.go:89] found id: ""
	I0815 18:42:03.596030   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:03.596112   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.600430   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.604422   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:03.604443   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:03.676629   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:03.676665   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.717487   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:03.717514   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.755606   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:03.755632   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.815152   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:03.815187   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.857853   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:03.857882   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:04.296939   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:04.296983   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:04.312346   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:04.312373   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:04.424132   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:04.424162   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:04.482298   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:04.482326   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:04.526805   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:04.526832   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:04.564842   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:04.564871   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:04.602297   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:04.602323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.137972   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:42:07.143165   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:42:07.144155   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:42:07.144174   67936 api_server.go:131] duration metric: took 3.876063215s to wait for apiserver health ...
	I0815 18:42:07.144182   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:42:07.144201   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:07.144243   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:07.185685   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:07.185709   67936 cri.go:89] found id: ""
	I0815 18:42:07.185717   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:07.185782   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.190086   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:07.190179   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:07.233020   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:07.233044   67936 cri.go:89] found id: ""
	I0815 18:42:07.233053   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:07.233114   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.237639   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:07.237698   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:07.277613   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:07.277642   67936 cri.go:89] found id: ""
	I0815 18:42:07.277652   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:07.277714   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.282273   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:07.282346   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:07.324972   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.325003   67936 cri.go:89] found id: ""
	I0815 18:42:07.325013   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:07.325071   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.329402   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:07.329470   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:07.369812   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.369840   67936 cri.go:89] found id: ""
	I0815 18:42:07.369849   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:07.369902   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.373993   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:07.374055   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:07.412036   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.412062   67936 cri.go:89] found id: ""
	I0815 18:42:07.412072   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:07.412145   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.416191   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:07.416263   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:07.457677   67936 cri.go:89] found id: ""
	I0815 18:42:07.457710   67936 logs.go:276] 0 containers: []
	W0815 18:42:07.457721   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:07.457728   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:07.457792   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:07.498173   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:07.498199   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.498204   67936 cri.go:89] found id: ""
	I0815 18:42:07.498210   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:07.498268   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.502704   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.506501   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:07.506520   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.542685   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:07.542713   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.584070   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:07.584097   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.634780   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:07.634812   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.669410   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:07.669436   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:08.062406   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:08.062454   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:08.077171   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:08.077209   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:08.186125   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:08.186158   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:08.229621   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:08.229655   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:08.266791   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:08.266818   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:08.314172   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:08.314197   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:08.388793   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:08.388837   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:08.438287   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:08.438317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:10.990845   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:42:10.990875   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.990879   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.990883   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.990887   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.990890   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.990894   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.990900   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.990905   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.990913   67936 system_pods.go:74] duration metric: took 3.846725869s to wait for pod list to return data ...
	I0815 18:42:10.990919   67936 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:42:10.993933   67936 default_sa.go:45] found service account: "default"
	I0815 18:42:10.993958   67936 default_sa.go:55] duration metric: took 3.032805ms for default service account to be created ...
	I0815 18:42:10.993968   67936 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:42:10.998531   67936 system_pods.go:86] 8 kube-system pods found
	I0815 18:42:10.998553   67936 system_pods.go:89] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.998558   67936 system_pods.go:89] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.998562   67936 system_pods.go:89] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.998567   67936 system_pods.go:89] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.998570   67936 system_pods.go:89] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.998575   67936 system_pods.go:89] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.998582   67936 system_pods.go:89] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.998586   67936 system_pods.go:89] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.998592   67936 system_pods.go:126] duration metric: took 4.619003ms to wait for k8s-apps to be running ...
	I0815 18:42:10.998598   67936 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:42:10.998638   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:42:11.015236   67936 system_svc.go:56] duration metric: took 16.627802ms WaitForService to wait for kubelet
	I0815 18:42:11.015260   67936 kubeadm.go:582] duration metric: took 4m20.517256799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:42:11.015280   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:42:11.018544   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:42:11.018570   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:42:11.018584   67936 node_conditions.go:105] duration metric: took 3.298753ms to run NodePressure ...
	I0815 18:42:11.018598   67936 start.go:241] waiting for startup goroutines ...
	I0815 18:42:11.018611   67936 start.go:246] waiting for cluster config update ...
	I0815 18:42:11.018626   67936 start.go:255] writing updated cluster config ...
	I0815 18:42:11.018907   67936 ssh_runner.go:195] Run: rm -f paused
	I0815 18:42:11.065371   67936 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:42:11.067513   67936 out.go:177] * Done! kubectl is now configured to use "no-preload-599042" cluster and "default" namespace by default
	I0815 18:42:12.186839   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:12.187041   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:32.187938   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:32.188123   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.189799   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:43:12.190012   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.190023   68713 kubeadm.go:310] 
	I0815 18:43:12.190075   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:43:12.190133   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:43:12.190148   68713 kubeadm.go:310] 
	I0815 18:43:12.190205   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:43:12.190265   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:43:12.190394   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:43:12.190403   68713 kubeadm.go:310] 
	I0815 18:43:12.190523   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:43:12.190571   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:43:12.190627   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:43:12.190636   68713 kubeadm.go:310] 
	I0815 18:43:12.190772   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:43:12.190928   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:43:12.190950   68713 kubeadm.go:310] 
	I0815 18:43:12.191108   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:43:12.191218   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:43:12.191344   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:43:12.191478   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:43:12.191504   68713 kubeadm.go:310] 
	I0815 18:43:12.192283   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:43:12.192421   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:43:12.192523   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0815 18:43:12.192655   68713 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 18:43:12.192699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:43:12.658571   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:43:12.675797   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:43:12.687340   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:43:12.687370   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:43:12.687422   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:43:12.698401   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:43:12.698464   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:43:12.709632   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:43:12.720330   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:43:12.720386   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:43:12.731593   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.742122   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:43:12.742185   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.753042   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:43:12.762799   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:43:12.762855   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:43:12.772788   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:43:12.987927   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:45:08.956975   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:45:08.957069   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:45:08.958834   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:45:08.958904   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:45:08.958993   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:45:08.959133   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:45:08.959280   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:45:08.959376   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:45:08.961205   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:45:08.961294   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:45:08.961352   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:45:08.961424   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:45:08.961475   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:45:08.961536   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:45:08.961581   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:45:08.961637   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:45:08.961689   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:45:08.961795   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:45:08.961910   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:45:08.961971   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:45:08.962030   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:45:08.962078   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:45:08.962127   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:45:08.962214   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:45:08.962316   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:45:08.962448   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:45:08.962565   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:45:08.962626   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:45:08.962724   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:45:08.964403   68713 out.go:235]   - Booting up control plane ...
	I0815 18:45:08.964526   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:45:08.964631   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:45:08.964736   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:45:08.964866   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:45:08.965043   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:45:08.965121   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:45:08.965225   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965418   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965508   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965703   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965766   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965919   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965981   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966140   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966200   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966381   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966389   68713 kubeadm.go:310] 
	I0815 18:45:08.966438   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:45:08.966473   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:45:08.966481   68713 kubeadm.go:310] 
	I0815 18:45:08.966533   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:45:08.966580   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:45:08.966711   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:45:08.966718   68713 kubeadm.go:310] 
	I0815 18:45:08.966844   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:45:08.966900   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:45:08.966948   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:45:08.966958   68713 kubeadm.go:310] 
	I0815 18:45:08.967082   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:45:08.967201   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:45:08.967214   68713 kubeadm.go:310] 
	I0815 18:45:08.967341   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:45:08.967450   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:45:08.967548   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:45:08.967646   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:45:08.967678   68713 kubeadm.go:310] 
	I0815 18:45:08.967716   68713 kubeadm.go:394] duration metric: took 7m56.388213745s to StartCluster
	I0815 18:45:08.967768   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:45:08.967834   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:45:09.013913   68713 cri.go:89] found id: ""
	I0815 18:45:09.013943   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.013954   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:45:09.013961   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:45:09.014030   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:45:09.051370   68713 cri.go:89] found id: ""
	I0815 18:45:09.051395   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.051403   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:45:09.051409   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:45:09.051477   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:45:09.086615   68713 cri.go:89] found id: ""
	I0815 18:45:09.086646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.086653   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:45:09.086659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:45:09.086708   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:45:09.122335   68713 cri.go:89] found id: ""
	I0815 18:45:09.122370   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.122381   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:45:09.122389   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:45:09.122453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:45:09.163207   68713 cri.go:89] found id: ""
	I0815 18:45:09.163232   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.163241   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:45:09.163247   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:45:09.163308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:45:09.199396   68713 cri.go:89] found id: ""
	I0815 18:45:09.199426   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.199437   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:45:09.199444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:45:09.199504   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:45:09.235073   68713 cri.go:89] found id: ""
	I0815 18:45:09.235101   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.235112   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:45:09.235120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:45:09.235180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:45:09.271614   68713 cri.go:89] found id: ""
	I0815 18:45:09.271646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.271659   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:45:09.271671   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:45:09.271686   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:45:09.372192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:45:09.372214   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:45:09.372231   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:45:09.496743   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:45:09.496780   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:45:09.540434   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:45:09.540471   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:45:09.595546   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:45:09.595584   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 18:45:09.609831   68713 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:45:09.609885   68713 out.go:270] * 
	W0815 18:45:09.609942   68713 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.609956   68713 out.go:270] * 
	W0815 18:45:09.610794   68713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:45:09.614213   68713 out.go:201] 
	W0815 18:45:09.615379   68713 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.615420   68713 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:45:09.615437   68713 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:45:09.616840   68713 out.go:201] 
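	(Editor's note: the commands below are a minimal sketch of acting on the suggestion printed above; every command is quoted from this log except the optional -p/--profile flag, and the profile name for this run is not assumed here.)
	# inspect the kubelet that failed its health check
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers via CRI-O, then read the logs of a failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# retry the start with the cgroup-driver hint from the suggestion above
	# (add -p <profile> to target the cluster used in this run)
	minikube start --extra-config=kubelet.cgroup-driver=systemd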
	
	
	==> CRI-O <==
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.661041329Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747839661019530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b392d94-0126-41a3-99dd-6500924f2379 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.661587212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a1159e59-1f03-4a11-a372-c243286fd4dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.661694721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a1159e59-1f03-4a11-a372-c243286fd4dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.661906244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd,PodSandboxId:2bee519619535082644fe996c8b8fbb83d70e601fe2096259878aca2111f98db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747293941754231,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6979830-492e-4ef7-960f-2d4756de1c8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923,PodSandboxId:661bee4cd9442dbb4799db303272bf39168ead51cc18b84e439cf9c131bd132c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293418149691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rc947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d041322-9d6b-4f46-8f58-e2991f34a297,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23,PodSandboxId:14b164f9ba75378fb5eb2dde1f5dd63af841fd35173bc236e45de1fe1818f34d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293393435426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mf6q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
5f7f959-715b-48a1-9f85-f267614182f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60,PodSandboxId:21244b9c171a08ce8ce0df6e42b966866e2be778e8645f28b431000908dbc672,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723747292717441345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ktczt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e5b692-edd5-48fd-879b-7b8da4dea9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202,PodSandboxId:288e460fbf36cb3325b66c511b3800e477442ef7918fd5be592c9cca7575d44f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747281489372370,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7be271d3c560008ab55525ae8d1647,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b,PodSandboxId:045b1bc78063e67745939f5c01b8bf7e68904f5571e29cecb54a33fcab375408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747281483006224,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29,PodSandboxId:72b366a683f32ceb84eaa9817abcde93225b8cc46e31b3ed361cb0836b047fa1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747281516659176,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61757fe39b4aeb4552b1709a7caa21c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd,PodSandboxId:81d4b953fb109076577f2ef42ccd5bb0d1ae555d6b29cfa93d9d8b9c4eb27a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747281419870562,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e277384463d451b36e4fbd6f3eedcba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127,PodSandboxId:28629532ce90ad5195501dc9d9c6c016481208aa96b69eb7d120166ac83f6f3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746994145532058,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a1159e59-1f03-4a11-a372-c243286fd4dc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.699732723Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6c2b32a6-48a7-4f56-9a12-5942dedbb81a name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.699822767Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c2b32a6-48a7-4f56-9a12-5942dedbb81a name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.700804674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8bc2ad7a-9bca-4499-8ab6-ad20429e8ca3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.701226912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747839701195424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8bc2ad7a-9bca-4499-8ab6-ad20429e8ca3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.701924050Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7f8a7586-d3da-4a66-bd41-08baf0449086 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.701976134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7f8a7586-d3da-4a66-bd41-08baf0449086 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.702175794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd,PodSandboxId:2bee519619535082644fe996c8b8fbb83d70e601fe2096259878aca2111f98db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747293941754231,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6979830-492e-4ef7-960f-2d4756de1c8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923,PodSandboxId:661bee4cd9442dbb4799db303272bf39168ead51cc18b84e439cf9c131bd132c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293418149691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rc947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d041322-9d6b-4f46-8f58-e2991f34a297,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23,PodSandboxId:14b164f9ba75378fb5eb2dde1f5dd63af841fd35173bc236e45de1fe1818f34d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293393435426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mf6q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
5f7f959-715b-48a1-9f85-f267614182f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60,PodSandboxId:21244b9c171a08ce8ce0df6e42b966866e2be778e8645f28b431000908dbc672,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723747292717441345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ktczt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e5b692-edd5-48fd-879b-7b8da4dea9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202,PodSandboxId:288e460fbf36cb3325b66c511b3800e477442ef7918fd5be592c9cca7575d44f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747281489372370,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7be271d3c560008ab55525ae8d1647,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b,PodSandboxId:045b1bc78063e67745939f5c01b8bf7e68904f5571e29cecb54a33fcab375408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747281483006224,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29,PodSandboxId:72b366a683f32ceb84eaa9817abcde93225b8cc46e31b3ed361cb0836b047fa1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747281516659176,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61757fe39b4aeb4552b1709a7caa21c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd,PodSandboxId:81d4b953fb109076577f2ef42ccd5bb0d1ae555d6b29cfa93d9d8b9c4eb27a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747281419870562,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e277384463d451b36e4fbd6f3eedcba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127,PodSandboxId:28629532ce90ad5195501dc9d9c6c016481208aa96b69eb7d120166ac83f6f3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746994145532058,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7f8a7586-d3da-4a66-bd41-08baf0449086 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.746839711Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c192a01e-e49d-4f64-aa10-457b8dace414 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.746913141Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c192a01e-e49d-4f64-aa10-457b8dace414 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.749209301Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=058b2e64-f1f5-4195-b427-d41cfd1392db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.749675106Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747839749647575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=058b2e64-f1f5-4195-b427-d41cfd1392db name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.750594889Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=39af1876-f41d-422b-8683-05caa2bad640 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.750693549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=39af1876-f41d-422b-8683-05caa2bad640 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.750889694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd,PodSandboxId:2bee519619535082644fe996c8b8fbb83d70e601fe2096259878aca2111f98db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747293941754231,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6979830-492e-4ef7-960f-2d4756de1c8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923,PodSandboxId:661bee4cd9442dbb4799db303272bf39168ead51cc18b84e439cf9c131bd132c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293418149691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rc947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d041322-9d6b-4f46-8f58-e2991f34a297,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23,PodSandboxId:14b164f9ba75378fb5eb2dde1f5dd63af841fd35173bc236e45de1fe1818f34d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293393435426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mf6q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
5f7f959-715b-48a1-9f85-f267614182f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60,PodSandboxId:21244b9c171a08ce8ce0df6e42b966866e2be778e8645f28b431000908dbc672,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723747292717441345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ktczt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e5b692-edd5-48fd-879b-7b8da4dea9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202,PodSandboxId:288e460fbf36cb3325b66c511b3800e477442ef7918fd5be592c9cca7575d44f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747281489372370,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7be271d3c560008ab55525ae8d1647,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b,PodSandboxId:045b1bc78063e67745939f5c01b8bf7e68904f5571e29cecb54a33fcab375408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747281483006224,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29,PodSandboxId:72b366a683f32ceb84eaa9817abcde93225b8cc46e31b3ed361cb0836b047fa1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747281516659176,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61757fe39b4aeb4552b1709a7caa21c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd,PodSandboxId:81d4b953fb109076577f2ef42ccd5bb0d1ae555d6b29cfa93d9d8b9c4eb27a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747281419870562,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e277384463d451b36e4fbd6f3eedcba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127,PodSandboxId:28629532ce90ad5195501dc9d9c6c016481208aa96b69eb7d120166ac83f6f3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746994145532058,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=39af1876-f41d-422b-8683-05caa2bad640 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.782881079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=245a3283-e6ac-478e-8b73-de6c24ad63e5 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.782975257Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=245a3283-e6ac-478e-8b73-de6c24ad63e5 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.783982417Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a81cbf90-8cd0-4bb8-881c-bec9ce544ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.784371406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747839784350577,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a81cbf90-8cd0-4bb8-881c-bec9ce544ce3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.784980101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f29808f2-de5c-4f03-accc-0bfc4430fc55 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.785036247Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f29808f2-de5c-4f03-accc-0bfc4430fc55 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:50:39 embed-certs-555028 crio[731]: time="2024-08-15 18:50:39.785252849Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd,PodSandboxId:2bee519619535082644fe996c8b8fbb83d70e601fe2096259878aca2111f98db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747293941754231,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6979830-492e-4ef7-960f-2d4756de1c8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923,PodSandboxId:661bee4cd9442dbb4799db303272bf39168ead51cc18b84e439cf9c131bd132c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293418149691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rc947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d041322-9d6b-4f46-8f58-e2991f34a297,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23,PodSandboxId:14b164f9ba75378fb5eb2dde1f5dd63af841fd35173bc236e45de1fe1818f34d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293393435426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mf6q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
5f7f959-715b-48a1-9f85-f267614182f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60,PodSandboxId:21244b9c171a08ce8ce0df6e42b966866e2be778e8645f28b431000908dbc672,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723747292717441345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ktczt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e5b692-edd5-48fd-879b-7b8da4dea9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202,PodSandboxId:288e460fbf36cb3325b66c511b3800e477442ef7918fd5be592c9cca7575d44f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747281489372370,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7be271d3c560008ab55525ae8d1647,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b,PodSandboxId:045b1bc78063e67745939f5c01b8bf7e68904f5571e29cecb54a33fcab375408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747281483006224,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29,PodSandboxId:72b366a683f32ceb84eaa9817abcde93225b8cc46e31b3ed361cb0836b047fa1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747281516659176,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61757fe39b4aeb4552b1709a7caa21c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd,PodSandboxId:81d4b953fb109076577f2ef42ccd5bb0d1ae555d6b29cfa93d9d8b9c4eb27a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747281419870562,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e277384463d451b36e4fbd6f3eedcba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127,PodSandboxId:28629532ce90ad5195501dc9d9c6c016481208aa96b69eb7d120166ac83f6f3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746994145532058,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f29808f2-de5c-4f03-accc-0bfc4430fc55 name=/runtime.v1.RuntimeService/ListContainers
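	(Editor's note: the debug entries above are CRI-O's responses to the Version, ImageFsInfo and ListContainers RPCs; a minimal sketch of issuing the same queries from a shell, assuming crictl is installed on the node and CRI-O listens on the socket reported in this log.)
	# point crictl at the CRI-O socket shown in the kubeadm output above
	export CONTAINER_RUNTIME_ENDPOINT=unix:///var/run/crio/crio.sock
	crictl version        # RuntimeService/Version
	crictl imagefsinfo    # ImageService/ImageFsInfo
	crictl ps -a          # RuntimeService/ListContainers (no filters)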
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bdbe7e7cc12a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2bee519619535       storage-provisioner
	35cbd61f4bfab       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   661bee4cd9442       coredns-6f6b679f8f-rc947
	bfbb9e69688fe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   14b164f9ba753       coredns-6f6b679f8f-mf6q4
	05f410d5291c1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   9 minutes ago       Running             kube-proxy                0                   21244b9c171a0       kube-proxy-ktczt
	ef05ad509ee70       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   9 minutes ago       Running             kube-scheduler            2                   72b366a683f32       kube-scheduler-embed-certs-555028
	c021e3026550c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   288e460fbf36c       etcd-embed-certs-555028
	e3c9992921abe       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   9 minutes ago       Running             kube-apiserver            2                   045b1bc78063e       kube-apiserver-embed-certs-555028
	8e65efc886174       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   9 minutes ago       Running             kube-controller-manager   2                   81d4b953fb109       kube-controller-manager-embed-certs-555028
	89d454829c809       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   14 minutes ago      Exited              kube-apiserver            1                   28629532ce90a       kube-apiserver-embed-certs-555028
	
	
	==> coredns [35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-555028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-555028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=embed-certs-555028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T18_41_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:41:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-555028
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:50:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:46:44 +0000   Thu, 15 Aug 2024 18:41:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:46:44 +0000   Thu, 15 Aug 2024 18:41:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:46:44 +0000   Thu, 15 Aug 2024 18:41:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:46:44 +0000   Thu, 15 Aug 2024 18:41:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.234
	  Hostname:    embed-certs-555028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58337ac85e14457bba146b4596c6a76a
	  System UUID:                58337ac8-5e14-457b-ba14-6b4596c6a76a
	  Boot ID:                    2d528187-5591-4970-93a9-8a059bc290b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-mf6q4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-6f6b679f8f-rc947                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-embed-certs-555028                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m13s
	  kube-system                 kube-apiserver-embed-certs-555028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-controller-manager-embed-certs-555028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-ktczt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-embed-certs-555028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 metrics-server-6867b74b74-zkpx5               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m6s   kube-proxy       
	  Normal  Starting                 9m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m13s  kubelet          Node embed-certs-555028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m13s  kubelet          Node embed-certs-555028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m13s  kubelet          Node embed-certs-555028 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m9s   node-controller  Node embed-certs-555028 event: Registered Node embed-certs-555028 in Controller
	
	
	==> dmesg <==
	[  +0.049950] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039053] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.784825] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.514965] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.570133] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.825722] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.059727] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.088931] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.182480] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.146928] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.315189] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.227918] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.060750] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.650920] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +4.583842] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.854177] kauditd_printk_skb: 85 callbacks suppressed
	[Aug15 18:41] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.473520] systemd-fstab-generator[2588]: Ignoring "noauto" option for root device
	[  +4.958275] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.130220] systemd-fstab-generator[2913]: Ignoring "noauto" option for root device
	[  +4.879520] systemd-fstab-generator[3034]: Ignoring "noauto" option for root device
	[  +0.117904] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.121404] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202] <==
	{"level":"info","ts":"2024-08-15T18:41:21.989937Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T18:41:21.990163Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b0fed9b50ef56dcc","initial-advertise-peer-urls":["https://192.168.50.234:2380"],"listen-peer-urls":["https://192.168.50.234:2380"],"advertise-client-urls":["https://192.168.50.234:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.234:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T18:41:21.990203Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T18:41:21.990282Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.234:2380"}
	{"level":"info","ts":"2024-08-15T18:41:21.990305Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.234:2380"}
	{"level":"info","ts":"2024-08-15T18:41:22.121567Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0fed9b50ef56dcc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-15T18:41:22.121653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0fed9b50ef56dcc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-15T18:41:22.121681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0fed9b50ef56dcc received MsgPreVoteResp from b0fed9b50ef56dcc at term 1"}
	{"level":"info","ts":"2024-08-15T18:41:22.121695Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0fed9b50ef56dcc became candidate at term 2"}
	{"level":"info","ts":"2024-08-15T18:41:22.121701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0fed9b50ef56dcc received MsgVoteResp from b0fed9b50ef56dcc at term 2"}
	{"level":"info","ts":"2024-08-15T18:41:22.121708Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b0fed9b50ef56dcc became leader at term 2"}
	{"level":"info","ts":"2024-08-15T18:41:22.121740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b0fed9b50ef56dcc elected leader b0fed9b50ef56dcc at term 2"}
	{"level":"info","ts":"2024-08-15T18:41:22.125950Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:41:22.128885Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b0fed9b50ef56dcc","local-member-attributes":"{Name:embed-certs-555028 ClientURLs:[https://192.168.50.234:2379]}","request-path":"/0/members/b0fed9b50ef56dcc/attributes","cluster-id":"cb9ea73d337b6d57","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T18:41:22.129442Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:41:22.129654Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T18:41:22.129705Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T18:41:22.131574Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cb9ea73d337b6d57","local-member-id":"b0fed9b50ef56dcc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:41:22.131718Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:41:22.133537Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:41:22.133596Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:41:22.135751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:41:22.147983Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:41:22.148789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.234:2379"}
	{"level":"info","ts":"2024-08-15T18:41:22.149833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:50:40 up 14 min,  0 users,  load average: 0.05, 0.17, 0.16
	Linux embed-certs-555028 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127] <==
	W0815 18:41:14.281761       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.375361       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.405403       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.422560       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.444651       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.475637       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.551062       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.552414       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.559296       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.565995       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.577708       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.692164       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.742545       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.760277       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.782856       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.797698       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.852610       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.938685       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.962183       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.981079       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.156824       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.164648       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.164899       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.211726       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.272041       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b] <==
	W0815 18:46:25.370382       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:46:25.370523       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:46:25.371608       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:46:25.371676       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:47:25.372648       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:47:25.372771       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 18:47:25.372655       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:47:25.372837       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0815 18:47:25.374081       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:47:25.374164       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:49:25.375270       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:49:25.375403       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 18:49:25.375449       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:49:25.375461       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0815 18:49:25.376655       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:49:25.376704       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd] <==
	E0815 18:45:31.338801       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:45:31.785953       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:46:01.344900       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:46:01.794662       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:46:31.352326       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:46:31.802952       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:46:44.024838       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-555028"
	E0815 18:47:01.359702       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:47:01.813763       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:47:20.231906       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="296.078µs"
	E0815 18:47:31.367248       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:47:31.823762       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:47:34.228977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="65.585µs"
	E0815 18:48:01.373002       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:48:01.833208       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:48:31.380591       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:48:31.841423       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:49:01.388782       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:49:01.849596       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:49:31.395422       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:49:31.858681       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:50:01.401793       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:50:01.868374       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:50:31.409370       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:50:31.875559       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:41:33.211781       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:41:33.255172       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.234"]
	E0815 18:41:33.255416       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:41:33.478156       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:41:33.478238       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:41:33.478268       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:41:33.504829       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:41:33.505127       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:41:33.505159       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:41:33.566402       1 config.go:197] "Starting service config controller"
	I0815 18:41:33.566448       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:41:33.566524       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:41:33.566546       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:41:33.579207       1 config.go:326] "Starting node config controller"
	I0815 18:41:33.579297       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:41:33.695814       1 shared_informer.go:320] Caches are synced for node config
	I0815 18:41:33.696226       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:41:33.696254       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29] <==
	W0815 18:41:24.409881       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 18:41:24.410031       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 18:41:25.208274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 18:41:25.208416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.219381       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 18:41:25.219594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.228358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 18:41:25.228452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.287166       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 18:41:25.287396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.293995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 18:41:25.294073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.390122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 18:41:25.390263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.392396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 18:41:25.392516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.491629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 18:41:25.492943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.557608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 18:41:25.558013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.682556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 18:41:25.682595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.793587       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 18:41:25.793822       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 18:41:28.390823       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:49:32 embed-certs-555028 kubelet[2920]: E0815 18:49:32.215003    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:49:37 embed-certs-555028 kubelet[2920]: E0815 18:49:37.384365    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747777384152272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:37 embed-certs-555028 kubelet[2920]: E0815 18:49:37.384410    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747777384152272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:44 embed-certs-555028 kubelet[2920]: E0815 18:49:44.214661    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:49:47 embed-certs-555028 kubelet[2920]: E0815 18:49:47.385678    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747787385394208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:47 embed-certs-555028 kubelet[2920]: E0815 18:49:47.386009    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747787385394208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:57 embed-certs-555028 kubelet[2920]: E0815 18:49:57.215028    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:49:57 embed-certs-555028 kubelet[2920]: E0815 18:49:57.387945    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747797387612808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:49:57 embed-certs-555028 kubelet[2920]: E0815 18:49:57.388038    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747797387612808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:07 embed-certs-555028 kubelet[2920]: E0815 18:50:07.389691    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747807389084359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:07 embed-certs-555028 kubelet[2920]: E0815 18:50:07.389744    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747807389084359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:09 embed-certs-555028 kubelet[2920]: E0815 18:50:09.215084    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:50:17 embed-certs-555028 kubelet[2920]: E0815 18:50:17.390936    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747817390685688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:17 embed-certs-555028 kubelet[2920]: E0815 18:50:17.390998    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747817390685688,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:23 embed-certs-555028 kubelet[2920]: E0815 18:50:23.215011    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:50:27 embed-certs-555028 kubelet[2920]: E0815 18:50:27.234250    2920 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:50:27 embed-certs-555028 kubelet[2920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:50:27 embed-certs-555028 kubelet[2920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:50:27 embed-certs-555028 kubelet[2920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:50:27 embed-certs-555028 kubelet[2920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:50:27 embed-certs-555028 kubelet[2920]: E0815 18:50:27.392645    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747827392319331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:27 embed-certs-555028 kubelet[2920]: E0815 18:50:27.392685    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747827392319331,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:35 embed-certs-555028 kubelet[2920]: E0815 18:50:35.215147    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:50:37 embed-certs-555028 kubelet[2920]: E0815 18:50:37.395851    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747837395246305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:37 embed-certs-555028 kubelet[2920]: E0815 18:50:37.395882    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747837395246305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd] <==
	I0815 18:41:34.051569       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 18:41:34.062388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 18:41:34.063170       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 18:41:34.071935       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 18:41:34.072084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-555028_b6ffbb90-3015-45d1-8de4-797eb7674e8e!
	I0815 18:41:34.073226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1875244-4cca-4562-be1c-7ec3504412a3", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-555028_b6ffbb90-3015-45d1-8de4-797eb7674e8e became leader
	I0815 18:41:34.172779       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-555028_b6ffbb90-3015-45d1-8de4-797eb7674e8e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-555028 -n embed-certs-555028
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-555028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-zkpx5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-555028 describe pod metrics-server-6867b74b74-zkpx5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-555028 describe pod metrics-server-6867b74b74-zkpx5: exit status 1 (62.449383ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-zkpx5" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-555028 describe pod metrics-server-6867b74b74-zkpx5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.11s)
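For reference, this UserAppExistsAfterStop check and the no-preload entry below perform the same wait: after the stop/start cycle the test polls for pods labeled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace for up to 9m0s (see the start_stop_delete_test.go:274 line in the next entry). A rough manual equivalent, sketched here only for illustration and assuming the embed-certs run uses the same selector and the context name shown in these logs, would be:

    # sketch only: reproduce the readiness wait the test performs
    # (context name, namespace, selector and timeout taken from the logs above/below)
    kubectl --context embed-certs-555028 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m0s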

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.2s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0815 18:42:47.733471   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:42:55.298031   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:44:52.218527   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-599042 -n no-preload-599042
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-08-15 18:51:11.577185465 +0000 UTC m=+6362.555290638
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-599042 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-599042 logs -n 25: (2.138918088s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-498665                              | stopped-upgrade-498665       | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-698209 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | disable-driver-mounts-698209                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:29 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-599042             | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-555028            | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-423062  | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-278865        | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-599042                  | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC | 15 Aug 24 18:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-555028                 | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-423062       | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-278865             | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:32:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:32:52.788403   68713 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:32:52.788704   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788715   68713 out.go:358] Setting ErrFile to fd 2...
	I0815 18:32:52.788719   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788916   68713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:32:52.789431   68713 out.go:352] Setting JSON to false
	I0815 18:32:52.790297   68713 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8119,"bootTime":1723738654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:32:52.790355   68713 start.go:139] virtualization: kvm guest
	I0815 18:32:52.792478   68713 out.go:177] * [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:32:52.793818   68713 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:32:52.793864   68713 notify.go:220] Checking for updates...
	I0815 18:32:52.796618   68713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:32:52.797914   68713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:32:52.799054   68713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:32:52.800337   68713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:32:52.801448   68713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:32:52.803087   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:32:52.803465   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.803521   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.819013   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0815 18:32:52.819447   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.819966   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.819985   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.820284   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.820482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.822582   68713 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 18:32:52.824024   68713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:32:52.824380   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.824425   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.839486   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0815 18:32:52.839905   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.840345   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.840367   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.840730   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.840904   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.876811   68713 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:32:52.878075   68713 start.go:297] selected driver: kvm2
	I0815 18:32:52.878098   68713 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.878240   68713 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:32:52.878920   68713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.879001   68713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:32:52.894158   68713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:32:52.894895   68713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:32:52.894953   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:32:52.894969   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:32:52.895020   68713 start.go:340] cluster config:
	{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.895203   68713 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.897304   68713 out.go:177] * Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	I0815 18:32:51.348753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:32:52.898737   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:32:52.898785   68713 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:32:52.898795   68713 cache.go:56] Caching tarball of preloaded images
	I0815 18:32:52.898861   68713 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:32:52.898871   68713 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:32:52.898962   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:32:52.899159   68713 start.go:360] acquireMachinesLock for old-k8s-version-278865: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:32:57.424754   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:00.496786   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:06.576768   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:09.648759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:15.728760   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:18.800783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:24.880725   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:27.952781   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:34.032763   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:37.104737   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:43.184796   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:46.260701   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:52.336771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:55.408745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:01.488742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:04.560759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:10.640771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:13.712753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:19.792795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:22.864720   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:28.944769   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:32.016745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:38.096783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:41.168739   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:47.248802   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:50.320778   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:56.400717   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:59.472780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:05.552762   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:08.624707   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:14.704753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:17.776748   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:23.856782   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:26.932742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:33.008795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:36.080807   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:42.160767   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:45.232800   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:51.312780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:54.384719   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:00.464740   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:03.536736   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:06.540805   68248 start.go:364] duration metric: took 4m1.610543673s to acquireMachinesLock for "embed-certs-555028"
	I0815 18:36:06.540869   68248 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:06.540881   68248 fix.go:54] fixHost starting: 
	I0815 18:36:06.541241   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:06.541272   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:06.556680   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0815 18:36:06.557105   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:06.557518   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:36:06.557540   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:06.557831   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:06.558059   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:06.558202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:36:06.559702   68248 fix.go:112] recreateIfNeeded on embed-certs-555028: state=Stopped err=<nil>
	I0815 18:36:06.559724   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	W0815 18:36:06.559877   68248 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:06.561410   68248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-555028" ...
	I0815 18:36:06.538474   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:06.538508   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.538805   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:36:06.538831   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.539016   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:36:06.540664   67936 machine.go:96] duration metric: took 4m37.431349663s to provisionDockerMachine
	I0815 18:36:06.540702   67936 fix.go:56] duration metric: took 4m37.452150687s for fixHost
	I0815 18:36:06.540709   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 4m37.452172562s
	W0815 18:36:06.540732   67936 start.go:714] error starting host: provision: host is not running
	W0815 18:36:06.540801   67936 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0815 18:36:06.540809   67936 start.go:729] Will try again in 5 seconds ...
	I0815 18:36:06.562384   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Start
	I0815 18:36:06.562537   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring networks are active...
	I0815 18:36:06.563252   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network default is active
	I0815 18:36:06.563554   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network mk-embed-certs-555028 is active
	I0815 18:36:06.563908   68248 main.go:141] libmachine: (embed-certs-555028) Getting domain xml...
	I0815 18:36:06.564614   68248 main.go:141] libmachine: (embed-certs-555028) Creating domain...
	I0815 18:36:07.763793   68248 main.go:141] libmachine: (embed-certs-555028) Waiting to get IP...
	I0815 18:36:07.764733   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.765099   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.765200   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.765085   69393 retry.go:31] will retry after 206.840107ms: waiting for machine to come up
	I0815 18:36:07.973596   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.974069   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.974093   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.974019   69393 retry.go:31] will retry after 319.002956ms: waiting for machine to come up
	I0815 18:36:08.294670   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.295125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.295154   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.295073   69393 retry.go:31] will retry after 425.99373ms: waiting for machine to come up
	I0815 18:36:08.722549   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.722954   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.722985   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.722903   69393 retry.go:31] will retry after 428.077891ms: waiting for machine to come up
	I0815 18:36:09.152674   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.153155   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.153187   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.153108   69393 retry.go:31] will retry after 476.041155ms: waiting for machine to come up
	I0815 18:36:09.630963   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.631456   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.631485   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.631395   69393 retry.go:31] will retry after 751.179582ms: waiting for machine to come up
	I0815 18:36:11.542364   67936 start.go:360] acquireMachinesLock for no-preload-599042: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:36:10.384466   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:10.384888   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:10.384916   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:10.384842   69393 retry.go:31] will retry after 1.028202731s: waiting for machine to come up
	I0815 18:36:11.414905   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:11.415343   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:11.415373   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:11.415283   69393 retry.go:31] will retry after 1.129105535s: waiting for machine to come up
	I0815 18:36:12.545941   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:12.546365   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:12.546387   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:12.546320   69393 retry.go:31] will retry after 1.734323812s: waiting for machine to come up
	I0815 18:36:14.283247   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:14.283622   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:14.283653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:14.283569   69393 retry.go:31] will retry after 1.657173562s: waiting for machine to come up
	I0815 18:36:15.943329   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:15.943730   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:15.943760   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:15.943669   69393 retry.go:31] will retry after 2.349664822s: waiting for machine to come up
	I0815 18:36:18.295797   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:18.296330   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:18.296363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:18.296264   69393 retry.go:31] will retry after 2.889119284s: waiting for machine to come up
	I0815 18:36:21.186597   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:21.186983   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:21.187004   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:21.186945   69393 retry.go:31] will retry after 2.79101595s: waiting for machine to come up
	I0815 18:36:23.981271   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981732   68248 main.go:141] libmachine: (embed-certs-555028) Found IP for machine: 192.168.50.234
	I0815 18:36:23.981761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has current primary IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981770   68248 main.go:141] libmachine: (embed-certs-555028) Reserving static IP address...
	I0815 18:36:23.982166   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.982189   68248 main.go:141] libmachine: (embed-certs-555028) DBG | skip adding static IP to network mk-embed-certs-555028 - found existing host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"}
	I0815 18:36:23.982200   68248 main.go:141] libmachine: (embed-certs-555028) Reserved static IP address: 192.168.50.234
	I0815 18:36:23.982210   68248 main.go:141] libmachine: (embed-certs-555028) Waiting for SSH to be available...
	I0815 18:36:23.982220   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Getting to WaitForSSH function...
	I0815 18:36:23.984253   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984578   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.984601   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984696   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH client type: external
	I0815 18:36:23.984720   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa (-rw-------)
	I0815 18:36:23.984752   68248 main.go:141] libmachine: (embed-certs-555028) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:23.984763   68248 main.go:141] libmachine: (embed-certs-555028) DBG | About to run SSH command:
	I0815 18:36:23.984772   68248 main.go:141] libmachine: (embed-certs-555028) DBG | exit 0
	I0815 18:36:24.104618   68248 main.go:141] libmachine: (embed-certs-555028) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:24.105023   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetConfigRaw
	I0815 18:36:24.105694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.108191   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108532   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.108568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108844   68248 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/config.json ...
	I0815 18:36:24.109037   68248 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:24.109055   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.109249   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.111363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111680   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.111725   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111821   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.111989   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112141   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112277   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.112454   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.112662   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.112673   68248 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:24.208951   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:24.208986   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209255   68248 buildroot.go:166] provisioning hostname "embed-certs-555028"
	I0815 18:36:24.209285   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209514   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.212393   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.212850   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.212878   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.213010   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.213198   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213340   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213466   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.213663   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.213821   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.213832   68248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-555028 && echo "embed-certs-555028" | sudo tee /etc/hostname
	I0815 18:36:24.327157   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-555028
	
	I0815 18:36:24.327191   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.330193   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330577   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.330607   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330824   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.331029   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331174   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331325   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.331508   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.331713   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.331732   68248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-555028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-555028/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-555028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:24.437909   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:24.437938   68248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:24.437977   68248 buildroot.go:174] setting up certificates
	I0815 18:36:24.437987   68248 provision.go:84] configureAuth start
	I0815 18:36:24.437996   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.438264   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.440637   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.440961   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.440993   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.441089   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.443071   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443415   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.443448   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443562   68248 provision.go:143] copyHostCerts
	I0815 18:36:24.443622   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:24.443643   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:24.443726   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:24.443843   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:24.443855   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:24.443893   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:24.443968   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:24.443977   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:24.444007   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:24.444074   68248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.embed-certs-555028 san=[127.0.0.1 192.168.50.234 embed-certs-555028 localhost minikube]
	I0815 18:36:24.507119   68248 provision.go:177] copyRemoteCerts
	I0815 18:36:24.507177   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:24.507202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.509835   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510230   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.510260   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510403   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.510606   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.510735   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.510842   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:24.590623   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:24.615635   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:36:24.643400   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:36:24.670394   68248 provision.go:87] duration metric: took 232.396705ms to configureAuth
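
The configureAuth step regenerates the machine's server certificate, signing it with the existing ca.pem/ca-key.pem and embedding the SANs listed in the log (127.0.0.1, 192.168.50.234, embed-certs-555028, localhost, minikube). A self-contained Go sketch of that signing step using only the standard library; the throwaway in-memory CA and the 24h validity are stand-ins for the real CA files and lifetime, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real flow loads ca.pem / ca-key.pem from the minikube dir.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log entry above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-555028"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.234")},
		DNSNames:     []string{"embed-certs-555028", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
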
	I0815 18:36:24.670415   68248 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:24.670609   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:24.670694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.673303   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673685   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.673721   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673863   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.674050   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674222   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674354   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.674513   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.674673   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.674688   68248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:25.149223   68429 start.go:364] duration metric: took 3m59.233021018s to acquireMachinesLock for "default-k8s-diff-port-423062"
	I0815 18:36:25.149295   68429 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:25.149306   68429 fix.go:54] fixHost starting: 
	I0815 18:36:25.149757   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:25.149799   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:25.166940   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I0815 18:36:25.167342   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:25.167882   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:25.167910   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:25.168179   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:25.168383   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:25.168553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:25.170072   68429 fix.go:112] recreateIfNeeded on default-k8s-diff-port-423062: state=Stopped err=<nil>
	I0815 18:36:25.170106   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	W0815 18:36:25.170263   68429 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:25.172091   68429 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-423062" ...
	I0815 18:36:25.173641   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Start
	I0815 18:36:25.173831   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring networks are active...
	I0815 18:36:25.174594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network default is active
	I0815 18:36:25.174981   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network mk-default-k8s-diff-port-423062 is active
	I0815 18:36:25.175410   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Getting domain xml...
	I0815 18:36:25.176275   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Creating domain...
	I0815 18:36:24.928110   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:24.928140   68248 machine.go:96] duration metric: took 819.089931ms to provisionDockerMachine
	I0815 18:36:24.928156   68248 start.go:293] postStartSetup for "embed-certs-555028" (driver="kvm2")
	I0815 18:36:24.928170   68248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:24.928190   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.928513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:24.928542   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.931301   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931756   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.931799   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931846   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.932028   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.932311   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.932477   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.011373   68248 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:25.015677   68248 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:25.015707   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:25.015798   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:25.015900   68248 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:25.016014   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:25.025465   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:25.049662   68248 start.go:296] duration metric: took 121.491742ms for postStartSetup
	I0815 18:36:25.049704   68248 fix.go:56] duration metric: took 18.508823511s for fixHost
	I0815 18:36:25.049728   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.052184   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052538   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.052583   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052718   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.052904   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053099   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.053438   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:25.053604   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:25.053614   68248 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:25.149075   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746985.122186042
	
	I0815 18:36:25.149095   68248 fix.go:216] guest clock: 1723746985.122186042
	I0815 18:36:25.149103   68248 fix.go:229] Guest: 2024-08-15 18:36:25.122186042 +0000 UTC Remote: 2024-08-15 18:36:25.049708543 +0000 UTC m=+260.258232753 (delta=72.477499ms)
	I0815 18:36:25.149131   68248 fix.go:200] guest clock delta is within tolerance: 72.477499ms
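
The clock check above runs `date +%s.%N` in the guest and compares it against the host wall clock, accepting the machine when the difference stays inside a tolerance (72.477499ms here). A small Go sketch of that comparison using the exact values from the log; the 2-second tolerance is an assumption for illustration, not necessarily the value minikube uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output (seconds and a 9-digit
// nanosecond field) and returns how far the guest clock is from ref.
func clockDelta(guestOut string, ref time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(ref), nil
}

func main() {
	// Values taken from the log above: guest 1723746985.122186042, remote 18:36:25.049708543 UTC.
	ref := time.Date(2024, 8, 15, 18, 36, 25, 49708543, time.UTC)
	d, _ := clockDelta("1723746985.122186042", ref)
	fmt.Println(d, "within 2s tolerance:", d < 2*time.Second && d > -2*time.Second)
	// prints: 72.477499ms within 2s tolerance: true
}
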
	I0815 18:36:25.149135   68248 start.go:83] releasing machines lock for "embed-certs-555028", held for 18.608287436s
	I0815 18:36:25.149158   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.149408   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:25.152125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152542   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.152568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152742   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153260   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153439   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153539   68248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:25.153587   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.153639   68248 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:25.153659   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.156311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156504   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156740   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156769   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156847   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156883   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.157040   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157122   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157303   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157318   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157473   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157479   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.233725   68248 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:25.253737   68248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:25.402047   68248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:25.409250   68248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:25.409328   68248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:25.426491   68248 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:25.426514   68248 start.go:495] detecting cgroup driver to use...
	I0815 18:36:25.426580   68248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:25.445177   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:25.459432   68248 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:25.459512   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:25.473777   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:25.488144   68248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:25.627700   68248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:25.791278   68248 docker.go:233] disabling docker service ...
	I0815 18:36:25.791349   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:25.810146   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:25.825131   68248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:25.975457   68248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:26.106757   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:26.123053   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:26.142739   68248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:26.142804   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.153163   68248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:26.153217   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.163863   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.175028   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.192480   68248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:26.208933   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.219825   68248 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.245623   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
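
The sed pipeline above leaves /etc/crio/crio.conf.d/02-crio.conf with pause_image = "registry.k8s.io/pause:3.10", cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls list containing net.ipv4.ip_unprivileged_port_start=0. A rough Go sketch of the first two rewrites applied to an in-memory copy of the drop-in; this is only an illustration of the substitution, the real flow performs the edits over SSH with sed exactly as shown:

package main

import (
	"fmt"
	"regexp"
)

// patchCrioConf mimics the sed commands from the log: pin the pause image,
// switch the cgroup manager to cgroupfs, and put conmon in the "pod" cgroup.
func patchCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`+"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(patchCrioConf(in))
}
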
	I0815 18:36:26.256645   68248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:26.265947   68248 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:26.266004   68248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:26.278665   68248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:26.289519   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:26.423656   68248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:26.560919   68248 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:26.560996   68248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:26.565696   68248 start.go:563] Will wait 60s for crictl version
	I0815 18:36:26.565764   68248 ssh_runner.go:195] Run: which crictl
	I0815 18:36:26.569498   68248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:26.609872   68248 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:26.609948   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.645300   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.681229   68248 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:26.682461   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:26.685495   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686011   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:26.686037   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686323   68248 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:26.690590   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:26.703512   68248 kubeadm.go:883] updating cluster {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:26.703679   68248 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:26.703748   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:26.740601   68248 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:26.740679   68248 ssh_runner.go:195] Run: which lz4
	I0815 18:36:26.744798   68248 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:26.748894   68248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:26.748921   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:28.188174   68248 crio.go:462] duration metric: took 1.443420751s to copy over tarball
	I0815 18:36:28.188254   68248 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:26.428013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting to get IP...
	I0815 18:36:26.428929   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429397   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.429391   69513 retry.go:31] will retry after 296.45967ms: waiting for machine to come up
	I0815 18:36:26.727871   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728273   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728298   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.728237   69513 retry.go:31] will retry after 258.379179ms: waiting for machine to come up
	I0815 18:36:26.988915   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.989374   69513 retry.go:31] will retry after 418.611169ms: waiting for machine to come up
	I0815 18:36:27.409905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410358   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.410327   69513 retry.go:31] will retry after 566.642237ms: waiting for machine to come up
	I0815 18:36:27.978717   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979183   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.979125   69513 retry.go:31] will retry after 740.292473ms: waiting for machine to come up
	I0815 18:36:28.720587   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.720970   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.721008   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:28.720941   69513 retry.go:31] will retry after 610.435484ms: waiting for machine to come up
	I0815 18:36:29.333342   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333696   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333731   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:29.333632   69513 retry.go:31] will retry after 1.059086771s: waiting for machine to come up
	I0815 18:36:30.394125   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394560   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394589   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:30.394519   69513 retry.go:31] will retry after 1.279753887s: waiting for machine to come up
	I0815 18:36:30.309340   68248 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.121056035s)
	I0815 18:36:30.309382   68248 crio.go:469] duration metric: took 2.121176349s to extract the tarball
	I0815 18:36:30.309394   68248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:30.346520   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:30.394771   68248 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:30.394789   68248 cache_images.go:84] Images are preloaded, skipping loading
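
The preload handling just completed follows a check, fetch, extract, delete pattern: stat /preloaded.tar.lz4, copy the cached tarball over only when it is missing, unpack it into /var with lz4-aware tar, then remove the tarball so crictl sees the images as preloaded. A hedged Go sketch of that pattern; the fetch callback stands in for the scp step and is not minikube's API:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload: if the lz4 tarball is not already on disk, fetch it, then
// unpack it into targetDir with the same tar flags the log shows, and finally
// remove the tarball.
func ensurePreload(tarball, targetDir string, fetch func(dst string) error) error {
	if _, err := os.Stat(tarball); err != nil {
		if err := fetch(tarball); err != nil {
			return err
		}
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", targetDir, "-xf", tarball)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extract %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	// The real flow scp's the preloaded-images tarball from the host cache; stubbed here.
	fetch := func(dst string) error {
		fmt.Println("would copy cached preload tarball to", dst)
		return fmt.Errorf("fetch not implemented in this sketch")
	}
	fmt.Println(ensurePreload("/preloaded.tar.lz4", "/var", fetch))
}
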
	I0815 18:36:30.394799   68248 kubeadm.go:934] updating node { 192.168.50.234 8443 v1.31.0 crio true true} ...
	I0815 18:36:30.394951   68248 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-555028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:30.395033   68248 ssh_runner.go:195] Run: crio config
	I0815 18:36:30.439636   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:30.439663   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:30.439678   68248 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:30.439707   68248 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.234 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-555028 NodeName:embed-certs-555028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:30.439899   68248 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-555028"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:30.439976   68248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:30.449774   68248 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:30.449842   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:30.458892   68248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 18:36:30.475171   68248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:30.490942   68248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 18:36:30.507498   68248 ssh_runner.go:195] Run: grep 192.168.50.234	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:30.511254   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:30.522772   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:30.646060   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:30.667948   68248 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028 for IP: 192.168.50.234
	I0815 18:36:30.667974   68248 certs.go:194] generating shared ca certs ...
	I0815 18:36:30.667994   68248 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:30.668178   68248 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:30.668231   68248 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:30.668244   68248 certs.go:256] generating profile certs ...
	I0815 18:36:30.668360   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/client.key
	I0815 18:36:30.668442   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key.539203f3
	I0815 18:36:30.668524   68248 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key
	I0815 18:36:30.668686   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:30.668725   68248 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:30.668737   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:30.668774   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:30.668807   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:30.668836   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:30.668941   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:30.669810   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:30.721245   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:30.753016   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:30.782005   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:30.815008   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 18:36:30.847615   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:30.871566   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:30.894778   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:30.919167   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:30.942597   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:30.965395   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:30.988959   68248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:31.005578   68248 ssh_runner.go:195] Run: openssl version
	I0815 18:36:31.011697   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:31.022496   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027102   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027154   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.033475   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:31.044793   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:31.055793   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060642   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060692   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.066544   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:31.077637   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:31.088468   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093295   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093347   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.098908   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
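
The command pairs above implement the standard OpenSSL hashed-name convention for trust stores: compute the certificate's subject hash (b5213941 for minikubeCA.pem, 51391683 for 20219.pem, 3ec20f2e for 202192.pem) and symlink <hash>.0 in /etc/ssl/certs at the installed PEM. A small Go sketch that shells out to the same openssl invocation; the paths in main are examples taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert reproduces the two commands from the log: ask openssl for the
// certificate's subject hash, then create the <hash>.0 symlink in the trust
// directory, which is how OpenSSL locates CA certificates by hash.
func linkCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate the -f behaviour of ln
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
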
	I0815 18:36:31.109856   68248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:31.114519   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:31.120709   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:31.126754   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:31.132917   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:31.138739   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:31.144785   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:31.150604   68248 kubeadm.go:392] StartCluster: {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:31.150702   68248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:31.150755   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.192152   68248 cri.go:89] found id: ""
	I0815 18:36:31.192253   68248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:31.203076   68248 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:31.203099   68248 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:31.203151   68248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:31.213659   68248 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:31.215070   68248 kubeconfig.go:125] found "embed-certs-555028" server: "https://192.168.50.234:8443"
	I0815 18:36:31.218243   68248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:31.228210   68248 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.234
	I0815 18:36:31.228245   68248 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:31.228267   68248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:31.228317   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.275944   68248 cri.go:89] found id: ""
	I0815 18:36:31.276031   68248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:31.294466   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:31.307241   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:31.307276   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:31.307327   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:36:31.316654   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:31.316722   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:31.326475   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:36:31.335726   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:31.335796   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:31.345063   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.353576   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:31.353628   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.362449   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:36:31.370717   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:31.370792   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:31.379827   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:31.389001   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:31.510611   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.158537   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.647891555s)
	I0815 18:36:33.158574   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.376600   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.459742   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.545503   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:33.545595   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.046191   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.546256   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.571236   68248 api_server.go:72] duration metric: took 1.025744612s to wait for apiserver process to appear ...
	I0815 18:36:34.571275   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:34.571297   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:31.675513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676042   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:31.675960   69513 retry.go:31] will retry after 1.669099573s: waiting for machine to come up
	I0815 18:36:33.348089   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348611   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348639   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:33.348575   69513 retry.go:31] will retry after 1.613394267s: waiting for machine to come up
	I0815 18:36:34.963674   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964187   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:34.964146   69513 retry.go:31] will retry after 2.128578928s: waiting for machine to come up
	I0815 18:36:37.262138   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.262168   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.262184   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.310539   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.310569   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.571713   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.590002   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:37.590062   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.071526   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.076179   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:38.076212   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.571714   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.576518   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:36:38.582358   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:38.582381   68248 api_server.go:131] duration metric: took 4.011097638s to wait for apiserver health ...
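For reference, the 403 -> 500 -> 200 progression above can be reproduced by hand against the endpoint minikube is polling. The commands below are an illustrative sketch, not output captured from this run; they assume host access to the VM's API port. Anonymous requests are rejected with 403 until the rbac/bootstrap-roles post-start hook finishes, after which the bootstrap RBAC rules normally allow unauthenticated access to /healthz and the endpoint returns "ok".

	# illustrative sketch, not part of the original log
	curl -k https://192.168.50.234:8443/healthz             # "ok" once every check passes
	curl -k https://192.168.50.234:8443/healthz?verbose     # lists each check, like the 500 bodies above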
	I0815 18:36:38.582393   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:38.582401   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:38.584203   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:38.585513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:38.604350   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
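The conflist written above is a standard bridge CNI configuration. The file below is a representative sketch for illustration only; the exact 496-byte file minikube generates may differ in names and subnet.

	# illustrative sketch of a bridge CNI conflist; not the literal file from this run
	{
	  "cniVersion": "1.0.0",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}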
	I0815 18:36:38.645538   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:38.653445   68248 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:38.653476   68248 system_pods.go:61] "coredns-6f6b679f8f-sjx7c" [93a037b9-1e7c-471a-b62d-d7898b2b8287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:38.653486   68248 system_pods.go:61] "etcd-embed-certs-555028" [7e526b10-7acd-4d25-9847-8e11e21ba8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:38.653495   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [3f317b0f-15a1-4e7d-8ca5-3cdf70dee711] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:38.653501   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [431113cd-bce9-4ecb-8233-c5463875f1b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:38.653506   68248 system_pods.go:61] "kube-proxy-dzwt7" [a8101c7e-c010-45a3-8746-0dc20c7ef0e2] Running
	I0815 18:36:38.653513   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [84a5d051-d8c1-4097-b92c-e2f0d7a03385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:38.653520   68248 system_pods.go:61] "metrics-server-6867b74b74-wp5rn" [222160bf-6774-49a5-9f30-7582748c8a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:38.653534   68248 system_pods.go:61] "storage-provisioner" [e88c8785-2d8b-47b6-850f-e6cda74a4f5a] Running
	I0815 18:36:38.653549   68248 system_pods.go:74] duration metric: took 7.98765ms to wait for pod list to return data ...
	I0815 18:36:38.653558   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:38.656864   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:38.656893   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:38.656906   68248 node_conditions.go:105] duration metric: took 3.340245ms to run NodePressure ...
	I0815 18:36:38.656923   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:38.918518   68248 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923148   68248 kubeadm.go:739] kubelet initialised
	I0815 18:36:38.923168   68248 kubeadm.go:740] duration metric: took 4.62305ms waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923177   68248 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:38.927933   68248 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.934928   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934953   68248 pod_ready.go:82] duration metric: took 6.994953ms for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.934965   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934974   68248 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.939533   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939558   68248 pod_ready.go:82] duration metric: took 4.573835ms for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.939568   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939575   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.943567   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943590   68248 pod_ready.go:82] duration metric: took 4.004869ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.943601   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943608   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.049176   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049203   68248 pod_ready.go:82] duration metric: took 105.585473ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:39.049212   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049219   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449514   68248 pod_ready.go:93] pod "kube-proxy-dzwt7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:39.449539   68248 pod_ready.go:82] duration metric: took 400.311062ms for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449548   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
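The readiness polling above can also be checked directly with kubectl. The commands below are an illustrative sketch rather than commands taken from this run, and they assume the kubeconfig context is named after the profile, embed-certs-555028.

	# illustrative sketch, not part of the original log
	kubectl --context embed-certs-555028 get nodes -o wide
	kubectl --context embed-certs-555028 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m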
	I0815 18:36:37.094139   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094640   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:37.094583   69513 retry.go:31] will retry after 2.268267509s: waiting for machine to come up
	I0815 18:36:39.365595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.365975   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.366007   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:39.365938   69513 retry.go:31] will retry after 3.286154075s: waiting for machine to come up
	I0815 18:36:44.301710   68713 start.go:364] duration metric: took 3m51.402501772s to acquireMachinesLock for "old-k8s-version-278865"
	I0815 18:36:44.301771   68713 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:44.301792   68713 fix.go:54] fixHost starting: 
	I0815 18:36:44.302227   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:44.302265   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:44.319819   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0815 18:36:44.320335   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:44.320975   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:36:44.321003   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:44.321380   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:44.321572   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:36:44.321720   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:36:44.323551   68713 fix.go:112] recreateIfNeeded on old-k8s-version-278865: state=Stopped err=<nil>
	I0815 18:36:44.323586   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	W0815 18:36:44.323748   68713 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:44.325761   68713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-278865" ...
	I0815 18:36:41.456648   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:43.456917   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:42.653801   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654221   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has current primary IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654251   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Found IP for machine: 192.168.61.7
	I0815 18:36:42.654268   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserving static IP address...
	I0815 18:36:42.654714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.654759   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | skip adding static IP to network mk-default-k8s-diff-port-423062 - found existing host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"}
	I0815 18:36:42.654778   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserved static IP address: 192.168.61.7
	I0815 18:36:42.654798   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for SSH to be available...
	I0815 18:36:42.654815   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Getting to WaitForSSH function...
	I0815 18:36:42.657618   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.657968   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.657996   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.658093   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH client type: external
	I0815 18:36:42.658115   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa (-rw-------)
	I0815 18:36:42.658200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:42.658223   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | About to run SSH command:
	I0815 18:36:42.658234   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | exit 0
	I0815 18:36:42.780714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:42.781095   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetConfigRaw
	I0815 18:36:42.781734   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:42.784384   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.784820   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.784853   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.785137   68429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/config.json ...
	I0815 18:36:42.785364   68429 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:42.785390   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:42.785599   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.788083   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.788465   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788655   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.788833   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789006   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.789301   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.789607   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.789625   68429 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:42.889002   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:42.889031   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889237   68429 buildroot.go:166] provisioning hostname "default-k8s-diff-port-423062"
	I0815 18:36:42.889260   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.892072   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892422   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.892445   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892645   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.892846   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.892995   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.893148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.893286   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.893490   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.893505   68429 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-423062 && echo "default-k8s-diff-port-423062" | sudo tee /etc/hostname
	I0815 18:36:43.008310   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-423062
	
	I0815 18:36:43.008347   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.011091   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011446   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.011472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011653   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.011864   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012027   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012159   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.012321   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.012518   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.012537   68429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-423062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-423062/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-423062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:43.121095   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:43.121123   68429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:43.121157   68429 buildroot.go:174] setting up certificates
	I0815 18:36:43.121172   68429 provision.go:84] configureAuth start
	I0815 18:36:43.121186   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:43.121510   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:43.123863   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124178   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.124200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124312   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.126385   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126633   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.126667   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126784   68429 provision.go:143] copyHostCerts
	I0815 18:36:43.126861   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:43.126884   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:43.126944   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:43.127052   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:43.127062   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:43.127090   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:43.127177   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:43.127187   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:43.127215   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:43.127286   68429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-423062 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-423062 localhost minikube]
	I0815 18:36:43.627396   68429 provision.go:177] copyRemoteCerts
	I0815 18:36:43.627460   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:43.627485   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.630025   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630311   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.630340   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630479   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.630670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.630850   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.630976   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:43.712571   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:43.738904   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 18:36:43.764328   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:36:43.787211   68429 provision.go:87] duration metric: took 666.026026ms to configureAuth
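The server certificate generated above (SANs 127.0.0.1, 192.168.61.7, default-k8s-diff-port-423062, localhost, minikube) can be inspected from inside the VM once it has been copied to /etc/docker. This is an illustrative sketch, not captured from this run.

	# illustrative sketch, not part of the original log
	openssl x509 -in /etc/docker/server.pem -noout -subject -dates -ext subjectAltName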
	I0815 18:36:43.787241   68429 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:43.787467   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:43.787567   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.789803   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790210   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.790232   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790432   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.790604   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790729   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.791021   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.791169   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.791187   68429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:44.067277   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:44.067307   68429 machine.go:96] duration metric: took 1.281926749s to provisionDockerMachine
	I0815 18:36:44.067322   68429 start.go:293] postStartSetup for "default-k8s-diff-port-423062" (driver="kvm2")
	I0815 18:36:44.067335   68429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:44.067360   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.067711   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:44.067749   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.070224   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070543   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.070573   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070740   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.070925   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.071079   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.071269   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.152784   68429 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:44.157264   68429 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:44.157291   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:44.157364   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:44.157461   68429 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:44.157580   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:44.168520   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:44.195223   68429 start.go:296] duration metric: took 127.886016ms for postStartSetup
	I0815 18:36:44.195268   68429 fix.go:56] duration metric: took 19.045962302s for fixHost
	I0815 18:36:44.195292   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.197711   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198065   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.198090   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198281   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.198438   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198614   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198768   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.198959   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:44.199154   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:44.199172   68429 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:44.301519   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747004.273982003
	
	I0815 18:36:44.301543   68429 fix.go:216] guest clock: 1723747004.273982003
	I0815 18:36:44.301553   68429 fix.go:229] Guest: 2024-08-15 18:36:44.273982003 +0000 UTC Remote: 2024-08-15 18:36:44.195273929 +0000 UTC m=+258.412094909 (delta=78.708074ms)
	I0815 18:36:44.301598   68429 fix.go:200] guest clock delta is within tolerance: 78.708074ms
	I0815 18:36:44.301606   68429 start.go:83] releasing machines lock for "default-k8s-diff-port-423062", held for 19.152336719s
	I0815 18:36:44.301638   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.301903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:44.305012   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305498   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.305524   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305742   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306240   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306425   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306533   68429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:44.306595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.306689   68429 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:44.306714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.309649   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.309838   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310098   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310133   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310250   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310267   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310296   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310457   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310634   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310654   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310794   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310798   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.310947   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.412125   68429 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:44.420070   68429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:44.566014   68429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:44.572209   68429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:44.572283   68429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:44.593041   68429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:44.593067   68429 start.go:495] detecting cgroup driver to use...
	I0815 18:36:44.593145   68429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:44.613683   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:44.627766   68429 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:44.627851   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:44.641172   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:44.654952   68429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:44.778684   68429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:44.965548   68429 docker.go:233] disabling docker service ...
	I0815 18:36:44.965631   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:44.983153   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:44.999109   68429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:45.131097   68429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:45.270930   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:45.287846   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:45.309345   68429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:45.309407   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.320032   68429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:45.320092   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.331647   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.342499   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.353192   68429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:45.364163   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.381124   68429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.403692   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.415032   68429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:45.424798   68429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:45.424859   68429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:45.439077   68429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:45.448554   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:45.570697   68429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:45.719575   68429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:45.719655   68429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:45.724415   68429 start.go:563] Will wait 60s for crictl version
	I0815 18:36:45.724476   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:36:45.728443   68429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:45.770935   68429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:45.771023   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.799588   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.830915   68429 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:44.327259   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .Start
	I0815 18:36:44.327431   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring networks are active...
	I0815 18:36:44.328116   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network default is active
	I0815 18:36:44.328601   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network mk-old-k8s-version-278865 is active
	I0815 18:36:44.329081   68713 main.go:141] libmachine: (old-k8s-version-278865) Getting domain xml...
	I0815 18:36:44.331888   68713 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:36:45.633882   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting to get IP...
	I0815 18:36:45.634842   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.635216   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.635286   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.635206   69670 retry.go:31] will retry after 300.377534ms: waiting for machine to come up
	I0815 18:36:45.937793   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.938290   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.938312   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.938236   69670 retry.go:31] will retry after 282.311084ms: waiting for machine to come up
	I0815 18:36:46.222856   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.223327   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.223350   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.223283   69670 retry.go:31] will retry after 354.299649ms: waiting for machine to come up
	I0815 18:36:46.578770   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.579337   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.579360   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.579241   69670 retry.go:31] will retry after 382.947645ms: waiting for machine to come up
	I0815 18:36:46.964003   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.964911   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.964943   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.964824   69670 retry.go:31] will retry after 710.757442ms: waiting for machine to come up
	I0815 18:36:47.676738   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:47.677422   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:47.677450   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:47.677360   69670 retry.go:31] will retry after 588.944709ms: waiting for machine to come up
	I0815 18:36:45.957776   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:48.456345   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:45.832411   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:45.835145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835523   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:45.835553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835762   68429 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:45.840347   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:45.854348   68429 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:45.854471   68429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:45.854527   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:45.899238   68429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:45.899320   68429 ssh_runner.go:195] Run: which lz4
	I0815 18:36:45.903367   68429 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:45.907499   68429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:45.907526   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:47.317850   68429 crio.go:462] duration metric: took 1.414524229s to copy over tarball
	I0815 18:36:47.317929   68429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:49.443172   68429 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125212316s)
	I0815 18:36:49.443206   68429 crio.go:469] duration metric: took 2.125324606s to extract the tarball
	I0815 18:36:49.443215   68429 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:49.483693   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:49.535588   68429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:49.535617   68429 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:49.535627   68429 kubeadm.go:934] updating node { 192.168.61.7 8444 v1.31.0 crio true true} ...
	I0815 18:36:49.535753   68429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-423062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:49.535843   68429 ssh_runner.go:195] Run: crio config
	I0815 18:36:49.587186   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:49.587215   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:49.587232   68429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:49.587257   68429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-423062 NodeName:default-k8s-diff-port-423062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:49.587447   68429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-423062"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:49.587520   68429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:49.598312   68429 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:49.598376   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:49.608382   68429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0815 18:36:49.624449   68429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:49.647224   68429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0815 18:36:49.664848   68429 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:49.668582   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:49.680786   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:49.804940   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:49.826104   68429 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062 for IP: 192.168.61.7
	I0815 18:36:49.826130   68429 certs.go:194] generating shared ca certs ...
	I0815 18:36:49.826147   68429 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:49.826281   68429 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:49.826322   68429 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:49.826331   68429 certs.go:256] generating profile certs ...
	I0815 18:36:49.826403   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.key
	I0815 18:36:49.826461   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key.534debab
	I0815 18:36:49.826528   68429 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key
	I0815 18:36:49.826667   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:49.826713   68429 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:49.826725   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:49.826748   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:49.826777   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:49.826810   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:49.826868   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:49.827597   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:49.855678   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:49.891292   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:49.928612   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:49.961506   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 18:36:49.993955   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:50.019275   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:50.046773   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:50.074201   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:50.101491   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:50.125378   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:50.149974   68429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:50.166393   68429 ssh_runner.go:195] Run: openssl version
	I0815 18:36:50.172182   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:50.182755   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187110   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187155   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.192956   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:50.203680   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:50.214269   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218876   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218925   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.224463   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:50.234811   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:50.245585   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250397   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250446   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.256189   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:50.267342   68429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:50.272011   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:50.278217   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:50.284300   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:50.290402   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:50.296174   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:50.301957   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:50.307807   68429 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:50.307910   68429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:50.307973   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.359833   68429 cri.go:89] found id: ""
	I0815 18:36:50.359923   68429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:50.370306   68429 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:50.370324   68429 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:50.370379   68429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:50.379585   68429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:50.380510   68429 kubeconfig.go:125] found "default-k8s-diff-port-423062" server: "https://192.168.61.7:8444"
	I0815 18:36:50.384136   68429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:50.393393   68429 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.7
	I0815 18:36:50.393428   68429 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:50.393441   68429 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:50.393494   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.428085   68429 cri.go:89] found id: ""
	I0815 18:36:50.428162   68429 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:50.444032   68429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:50.454927   68429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:50.454948   68429 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:50.455000   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 18:36:50.464733   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:50.464797   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:50.473973   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 18:36:50.482861   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:50.482910   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:50.492213   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.501173   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:50.501230   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.510299   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 18:36:50.519262   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:50.519308   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:50.528632   68429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:50.537914   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:50.655230   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:48.268221   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:48.268790   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:48.268814   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:48.268736   69670 retry.go:31] will retry after 781.489196ms: waiting for machine to come up
	I0815 18:36:49.051824   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:49.052246   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:49.052277   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:49.052182   69670 retry.go:31] will retry after 1.393037007s: waiting for machine to come up
	I0815 18:36:50.446428   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:50.446860   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:50.446892   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:50.446800   69670 retry.go:31] will retry after 1.826779004s: waiting for machine to come up
	I0815 18:36:52.275716   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:52.276208   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:52.276231   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:52.276167   69670 retry.go:31] will retry after 1.746726312s: waiting for machine to come up
	I0815 18:36:50.458388   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:52.147996   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:52.148026   68248 pod_ready.go:82] duration metric: took 12.698470185s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:52.148039   68248 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:54.153927   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:51.670903   68429 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015612511s)
	I0815 18:36:51.670943   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:51.985806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.069082   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.189200   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:52.189298   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:52.689767   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.189633   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.205099   68429 api_server.go:72] duration metric: took 1.015908263s to wait for apiserver process to appear ...
	I0815 18:36:53.205136   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:53.205162   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:53.205695   68429 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0815 18:36:53.705285   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.721139   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.721177   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:55.721193   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.750790   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.750825   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:56.205675   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.212464   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.212509   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:56.705700   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.716232   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.716277   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:57.205663   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:57.211081   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:36:57.217736   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:57.217763   68429 api_server.go:131] duration metric: took 4.012620084s to wait for apiserver health ...
	I0815 18:36:57.217772   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:57.217778   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:57.219455   68429 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:54.025067   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:54.025508   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:54.025535   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:54.025462   69670 retry.go:31] will retry after 2.693215306s: waiting for machine to come up
	I0815 18:36:56.721740   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:56.722139   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:56.722178   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:56.722070   69670 retry.go:31] will retry after 3.370623363s: waiting for machine to come up
	I0815 18:36:57.220672   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:57.241710   68429 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:57.262714   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:57.272766   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:57.272822   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:57.272836   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:57.272849   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:57.272862   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:57.272872   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:36:57.272887   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:57.272896   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:57.272902   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:36:57.272913   68429 system_pods.go:74] duration metric: took 10.175415ms to wait for pod list to return data ...
	I0815 18:36:57.272924   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:57.276880   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:57.276915   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:57.276929   68429 node_conditions.go:105] duration metric: took 3.998879ms to run NodePressure ...
	I0815 18:36:57.276951   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:57.554251   68429 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558062   68429 kubeadm.go:739] kubelet initialised
	I0815 18:36:57.558084   68429 kubeadm.go:740] duration metric: took 3.811943ms waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558091   68429 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:57.562470   68429 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.567212   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567232   68429 pod_ready.go:82] duration metric: took 4.742538ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.567240   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567245   68429 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.571217   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571237   68429 pod_ready.go:82] duration metric: took 3.984908ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.571247   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571255   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.575456   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575494   68429 pod_ready.go:82] duration metric: took 4.232215ms for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.575507   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575515   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.665876   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665902   68429 pod_ready.go:82] duration metric: took 90.37918ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.665914   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665921   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.066377   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066402   68429 pod_ready.go:82] duration metric: took 400.475025ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.066411   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066426   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.465739   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465767   68429 pod_ready.go:82] duration metric: took 399.331024ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.465779   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465787   68429 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.866772   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866798   68429 pod_ready.go:82] duration metric: took 401.001046ms for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.866809   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866817   68429 pod_ready.go:39] duration metric: took 1.308717049s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:58.866835   68429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:36:58.878274   68429 ops.go:34] apiserver oom_adj: -16
	I0815 18:36:58.878298   68429 kubeadm.go:597] duration metric: took 8.507965813s to restartPrimaryControlPlane
	I0815 18:36:58.878308   68429 kubeadm.go:394] duration metric: took 8.570508558s to StartCluster
	I0815 18:36:58.878327   68429 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.878499   68429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:36:58.879927   68429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.880213   68429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:36:58.880262   68429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:36:58.880339   68429 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880375   68429 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-423062"
	I0815 18:36:58.880374   68429 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-423062"
	W0815 18:36:58.880383   68429 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:36:58.880367   68429 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880403   68429 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.880410   68429 addons.go:243] addon metrics-server should already be in state true
	I0815 18:36:58.880414   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880422   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:58.880428   68429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-423062"
	I0815 18:36:58.880434   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880772   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880778   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880801   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880820   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880826   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880855   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.882047   68429 out.go:177] * Verifying Kubernetes components...
	I0815 18:36:58.883440   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:58.895575   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0815 18:36:58.895577   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0815 18:36:58.895739   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0815 18:36:58.896031   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896063   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896121   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896511   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896529   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896612   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896631   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896749   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896768   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896917   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.896963   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897099   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897132   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.897483   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897527   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.897535   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897558   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.900773   68429 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.900796   68429 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:36:58.900825   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.901206   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.901238   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.912877   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0815 18:36:58.912903   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37245
	I0815 18:36:58.913271   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913344   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913835   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913845   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913852   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.913862   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.914177   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914218   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914361   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.914408   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.916165   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.916601   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.918553   68429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:36:58.918560   68429 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:36:56.154697   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.654414   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.919539   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0815 18:36:58.919773   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:36:58.919790   68429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:36:58.919809   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919884   68429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:58.919900   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:36:58.919916   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919945   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.920330   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.920343   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.920777   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.921363   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.921401   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.923262   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923629   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.923656   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923684   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924108   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924256   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924319   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.924337   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924501   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924564   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.924688   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.924773   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924944   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.925266   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.938064   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0815 18:36:58.938411   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.938762   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.938782   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.939057   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.939214   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.941134   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.941395   68429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:58.941414   68429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:36:58.941436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.943936   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944331   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.944355   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.944765   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.944900   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.944977   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:59.069466   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:59.090259   68429 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:36:59.203591   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:59.232676   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:36:59.232705   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:36:59.273079   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:59.287625   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:36:59.287653   68429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:36:59.359798   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:36:59.359821   68429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:36:59.406350   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:00.373429   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16980511s)
	I0815 18:37:00.373477   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373495   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373501   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.10037967s)
	I0815 18:37:00.373546   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373563   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373787   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373805   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373848   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373852   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373863   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373866   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373890   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373879   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373937   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.374313   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374322   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.374326   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.374344   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374355   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.379434   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.379450   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.379666   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.379679   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.389853   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.389872   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390152   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390173   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390181   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.390189   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390396   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390447   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390461   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390475   68429 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-423062"
	I0815 18:37:00.392530   68429 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:37:00.393703   68429 addons.go:510] duration metric: took 1.51344438s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
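At this point the log reports the storage-provisioner, default-storageclass and metrics-server addons as enabled for the default-k8s-diff-port-423062 profile. As a hedged aside (not part of the captured log), a minimal way to double-check that state from the host would be the commands below; the profile/context name is taken from the log above and a working kubeconfig is assumed.

# Illustrative verification only; profile name comes from the log above.
minikube addons list -p default-k8s-diff-port-423062
kubectl --context default-k8s-diff-port-423062 -n kube-system get deploy metrics-server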
	I0815 18:37:00.093896   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:00.094391   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:37:00.094453   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:37:00.094333   69670 retry.go:31] will retry after 2.855023319s: waiting for machine to come up
	I0815 18:37:04.297557   67936 start.go:364] duration metric: took 52.755115386s to acquireMachinesLock for "no-preload-599042"
	I0815 18:37:04.297614   67936 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:37:04.297639   67936 fix.go:54] fixHost starting: 
	I0815 18:37:04.298066   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:04.298096   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:04.317897   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
	I0815 18:37:04.318309   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:04.318797   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:04.318822   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:04.319191   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:04.319388   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:04.319543   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:04.320970   67936 fix.go:112] recreateIfNeeded on no-preload-599042: state=Stopped err=<nil>
	I0815 18:37:04.320994   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	W0815 18:37:04.321164   67936 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:37:04.322689   67936 out.go:177] * Restarting existing kvm2 VM for "no-preload-599042" ...
	I0815 18:37:00.654833   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:03.154235   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:02.950449   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950903   68713 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:37:02.950931   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950941   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:37:02.951319   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.951356   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | skip adding static IP to network mk-old-k8s-version-278865 - found existing host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"}
	I0815 18:37:02.951376   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:37:02.951393   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:37:02.951424   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:37:02.953498   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.953804   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953927   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:37:02.953957   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:37:02.953989   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:02.954001   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:37:02.954009   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:37:03.076431   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:03.076748   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:37:03.077325   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.079733   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080100   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.080132   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080332   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:37:03.080537   68713 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:03.080554   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:03.080717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.082778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083140   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.083168   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083331   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.083482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083612   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083730   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.083881   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.084067   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.084078   68713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:03.188779   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:03.188813   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189045   68713 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:37:03.189069   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189284   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.191858   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192171   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.192192   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192328   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.192533   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192676   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192822   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.193015   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.193180   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.193192   68713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:37:03.313099   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:37:03.313129   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.315840   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316196   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.316226   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316378   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.316608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316760   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316885   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.317001   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.317184   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.317207   68713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:03.429897   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:03.429934   68713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:03.429962   68713 buildroot.go:174] setting up certificates
	I0815 18:37:03.429972   68713 provision.go:84] configureAuth start
	I0815 18:37:03.429983   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.430274   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.432724   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433053   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.433083   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433212   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.435181   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435514   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.435543   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435657   68713 provision.go:143] copyHostCerts
	I0815 18:37:03.435715   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:03.435736   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:03.435804   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:03.435919   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:03.435929   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:03.435959   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:03.436045   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:03.436055   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:03.436082   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:03.436170   68713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
	I0815 18:37:03.604924   68713 provision.go:177] copyRemoteCerts
	I0815 18:37:03.604979   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:03.605003   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.607328   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607616   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.607634   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607821   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.608016   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.608171   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.608429   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:03.690560   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:03.714632   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:37:03.737805   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:03.762338   68713 provision.go:87] duration metric: took 332.353741ms to configureAuth
	I0815 18:37:03.762371   68713 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:03.762543   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:37:03.762608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.765626   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.765988   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.766018   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.766211   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.766380   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766574   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766712   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.766897   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.767053   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.767069   68713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:04.050635   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:04.050663   68713 machine.go:96] duration metric: took 970.113556ms to provisionDockerMachine
	I0815 18:37:04.050674   68713 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:37:04.050685   68713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:04.050717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.051048   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:04.051081   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.053709   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054095   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.054124   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054432   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.054622   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.054774   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.054914   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.139381   68713 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:04.145097   68713 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:04.145124   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:04.145201   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:04.145298   68713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:04.145421   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:04.156166   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:04.181562   68713 start.go:296] duration metric: took 130.872499ms for postStartSetup
	I0815 18:37:04.181605   68713 fix.go:56] duration metric: took 19.879821037s for fixHost
	I0815 18:37:04.181629   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.184268   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184652   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.184682   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184917   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.185151   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185345   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185502   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.185677   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:04.185925   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:04.185938   68713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:04.297391   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747024.271483326
	
	I0815 18:37:04.297413   68713 fix.go:216] guest clock: 1723747024.271483326
	I0815 18:37:04.297423   68713 fix.go:229] Guest: 2024-08-15 18:37:04.271483326 +0000 UTC Remote: 2024-08-15 18:37:04.181610291 +0000 UTC m=+251.426055371 (delta=89.873035ms)
	I0815 18:37:04.297448   68713 fix.go:200] guest clock delta is within tolerance: 89.873035ms
	I0815 18:37:04.297455   68713 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 19.99571173s
	I0815 18:37:04.297504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.297818   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:04.300970   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301425   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.301455   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301609   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302194   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302404   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302495   68713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:04.302545   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.302679   68713 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:04.302705   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.305673   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.305903   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306066   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306092   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306273   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306301   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306337   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306537   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306657   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306664   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306827   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306834   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.307009   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.409319   68713 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:04.415576   68713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:04.565772   68713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:04.571909   68713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:04.571996   68713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:04.588400   68713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:04.588427   68713 start.go:495] detecting cgroup driver to use...
	I0815 18:37:04.588528   68713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:04.604253   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:04.619003   68713 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:04.619051   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:04.632530   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:04.646080   68713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:04.763855   68713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:04.922470   68713 docker.go:233] disabling docker service ...
	I0815 18:37:04.922566   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:04.937301   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:04.950721   68713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:05.079767   68713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:05.210207   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:05.225569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:05.247998   68713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:37:05.248070   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.262851   68713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:05.262924   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.274489   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.285901   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.298749   68713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:05.310052   68713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:05.320992   68713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:05.321073   68713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:05.340323   68713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:05.354069   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:05.483573   68713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:05.647020   68713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:05.647094   68713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:05.653850   68713 start.go:563] Will wait 60s for crictl version
	I0815 18:37:05.653924   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:05.658476   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:05.697818   68713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:05.697907   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.724931   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.755831   68713 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:37:01.094934   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:03.594364   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:05.756950   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:05.759791   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760188   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:05.760220   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760468   68713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:05.764753   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:05.777462   68713 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:05.777614   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:37:05.777679   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:05.848895   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:05.848967   68713 ssh_runner.go:195] Run: which lz4
	I0815 18:37:05.853103   68713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:37:05.858012   68713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:37:05.858046   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:37:07.520567   68713 crio.go:462] duration metric: took 1.667489785s to copy over tarball
	I0815 18:37:07.520642   68713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:37:04.324093   67936 main.go:141] libmachine: (no-preload-599042) Calling .Start
	I0815 18:37:04.324263   67936 main.go:141] libmachine: (no-preload-599042) Ensuring networks are active...
	I0815 18:37:04.325099   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network default is active
	I0815 18:37:04.325778   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network mk-no-preload-599042 is active
	I0815 18:37:04.326007   67936 main.go:141] libmachine: (no-preload-599042) Getting domain xml...
	I0815 18:37:04.328184   67936 main.go:141] libmachine: (no-preload-599042) Creating domain...
	I0815 18:37:05.626206   67936 main.go:141] libmachine: (no-preload-599042) Waiting to get IP...
	I0815 18:37:05.627374   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.627877   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.627935   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.627844   69876 retry.go:31] will retry after 199.774188ms: waiting for machine to come up
	I0815 18:37:05.829673   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.830213   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.830240   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.830170   69876 retry.go:31] will retry after 255.850483ms: waiting for machine to come up
	I0815 18:37:06.087766   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.088378   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.088405   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.088330   69876 retry.go:31] will retry after 351.231421ms: waiting for machine to come up
	I0815 18:37:06.440937   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.441597   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.441626   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.441572   69876 retry.go:31] will retry after 602.620924ms: waiting for machine to come up
	I0815 18:37:07.046269   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.046745   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.046769   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.046712   69876 retry.go:31] will retry after 578.450642ms: waiting for machine to come up
	I0815 18:37:07.627330   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.627832   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.627859   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.627791   69876 retry.go:31] will retry after 731.331176ms: waiting for machine to come up
	I0815 18:37:08.361310   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:08.361746   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:08.361776   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:08.361706   69876 retry.go:31] will retry after 1.089237688s: waiting for machine to come up
	I0815 18:37:05.157378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:07.162990   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:09.654672   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:06.093822   68429 node_ready.go:49] node "default-k8s-diff-port-423062" has status "Ready":"True"
	I0815 18:37:06.093853   68429 node_ready.go:38] duration metric: took 7.003558244s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:37:06.093867   68429 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:06.103462   68429 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111214   68429 pod_ready.go:93] pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.111235   68429 pod_ready.go:82] duration metric: took 7.746382ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111244   68429 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117713   68429 pod_ready.go:93] pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.117739   68429 pod_ready.go:82] duration metric: took 6.487608ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117750   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:08.126216   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.128095   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.534169   68713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013498464s)
	I0815 18:37:10.534194   68713 crio.go:469] duration metric: took 3.013602868s to extract the tarball
	I0815 18:37:10.534201   68713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:37:10.578998   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:10.619043   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:10.619146   68713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:10.619246   68713 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.619247   68713 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.619278   68713 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:37:10.619275   68713 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.619291   68713 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.619304   68713 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.619322   68713 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.619405   68713 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621367   68713 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.621384   68713 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:37:10.621468   68713 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.621500   68713 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.621596   68713 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.621646   68713 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621706   68713 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.621897   68713 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.798617   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.828530   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:37:10.859528   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.918714   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.977028   68713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:37:10.977073   68713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.977119   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:10.980573   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.985503   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.990642   68713 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:37:10.990684   68713 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:37:10.990733   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.000388   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.007526   68713 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:37:11.007589   68713 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.007642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.008543   68713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:37:11.008581   68713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.008621   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.008642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077224   68713 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:37:11.077269   68713 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077228   68713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:37:11.077347   68713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.077371   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111299   68713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:37:11.111376   68713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.111387   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.111421   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111471   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.156942   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.156944   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.156997   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.263355   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.263448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.263455   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.263544   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.291407   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.312626   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.334606   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.427937   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.433739   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:11.435371   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.439448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.439541   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:37:11.450901   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.477906   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.520009   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:37:11.572349   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:37:11.686243   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:37:11.686295   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:37:11.686325   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:37:11.686378   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:37:11.686420   68713 cache_images.go:92] duration metric: took 1.067250234s to LoadCachedImages
	W0815 18:37:11.686494   68713 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0815 18:37:11.686508   68713 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:37:11.686620   68713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:11.686693   68713 ssh_runner.go:195] Run: crio config
	I0815 18:37:11.736781   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:37:11.736808   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:11.736824   68713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:11.736851   68713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:37:11.737039   68713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:11.737120   68713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:37:11.747511   68713 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:11.747585   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:11.757850   68713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:37:11.775982   68713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:11.792938   68713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:37:11.811576   68713 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:11.815708   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:11.829992   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:11.983884   68713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:12.002603   68713 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:37:12.002632   68713 certs.go:194] generating shared ca certs ...
	I0815 18:37:12.002682   68713 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.002867   68713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:12.002926   68713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:12.002942   68713 certs.go:256] generating profile certs ...
	I0815 18:37:12.025160   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:37:12.025296   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:37:12.025351   68713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:37:12.025516   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:12.025578   68713 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:12.025591   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:12.025627   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:12.025661   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:12.025691   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:12.025746   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:12.026614   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:12.066771   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:12.109649   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:12.176744   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:12.207990   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:37:12.244999   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:37:12.282338   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:12.308761   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:37:12.332316   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:12.355977   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:12.379169   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:12.405472   68713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:12.424110   68713 ssh_runner.go:195] Run: openssl version
	I0815 18:37:12.430231   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:12.441531   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.445971   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.446061   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.452134   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:12.466809   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:12.478211   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482659   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482708   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.490225   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:12.504908   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:12.516825   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521854   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521911   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.527884   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:12.539398   68713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:12.544010   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:12.549918   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:12.555714   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:12.561895   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:12.567736   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:12.573664   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:37:12.579510   68713 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:12.579627   68713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:12.579688   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.621503   68713 cri.go:89] found id: ""
	I0815 18:37:12.621576   68713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:12.632722   68713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:12.632746   68713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:12.632796   68713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:12.643192   68713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:12.644607   68713 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:12.645629   68713 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-278865" cluster setting kubeconfig missing "old-k8s-version-278865" context setting]
	I0815 18:37:12.647073   68713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.653052   68713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:12.665777   68713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.89
	I0815 18:37:12.665808   68713 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:12.665821   68713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:12.665872   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.713574   68713 cri.go:89] found id: ""
	I0815 18:37:12.713641   68713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:12.731459   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:12.741769   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:12.741789   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:12.741833   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:12.750990   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:12.751049   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:12.761621   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:12.771204   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:12.771261   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:12.782012   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:09.452971   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:09.453451   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:09.453494   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:09.453393   69876 retry.go:31] will retry after 1.35461204s: waiting for machine to come up
	I0815 18:37:10.809664   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:10.810127   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:10.810158   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:10.810065   69876 retry.go:31] will retry after 1.709820883s: waiting for machine to come up
	I0815 18:37:12.521458   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:12.521988   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:12.522016   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:12.521930   69876 retry.go:31] will retry after 1.401971708s: waiting for machine to come up
	I0815 18:37:13.925401   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:13.925868   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:13.925898   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:13.925824   69876 retry.go:31] will retry after 2.768002946s: waiting for machine to come up
	I0815 18:37:11.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:14.154561   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.400960   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:13.128357   68429 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.128379   68429 pod_ready.go:82] duration metric: took 7.010621879s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.128389   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136617   68429 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.136638   68429 pod_ready.go:82] duration metric: took 8.242471ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136648   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143530   68429 pod_ready.go:93] pod "kube-proxy-bnxv7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.143551   68429 pod_ready.go:82] duration metric: took 6.895931ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143563   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151691   68429 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.151721   68429 pod_ready.go:82] duration metric: took 8.149821ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151735   68429 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:15.158172   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.791928   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:12.791994   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:12.801858   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:12.811023   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:12.811083   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:12.822189   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:12.834293   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:12.974325   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.452192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.690442   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.798270   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.900783   68713 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:13.900877   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.401954   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.901809   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.401755   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.901010   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.401794   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.901149   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:17.401599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.694999   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:16.695488   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:16.695506   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:16.695430   69876 retry.go:31] will retry after 2.308386075s: waiting for machine to come up
	I0815 18:37:16.154692   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:18.653763   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.159197   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:19.159442   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.901511   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.401720   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.900976   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.401223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.901522   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.901573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:22.401279   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.005581   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:19.005979   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:19.006008   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:19.005930   69876 retry.go:31] will retry after 2.758801207s: waiting for machine to come up
	I0815 18:37:21.766860   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767286   67936 main.go:141] libmachine: (no-preload-599042) Found IP for machine: 192.168.72.14
	I0815 18:37:21.767303   67936 main.go:141] libmachine: (no-preload-599042) Reserving static IP address...
	I0815 18:37:21.767314   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has current primary IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767722   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.767745   67936 main.go:141] libmachine: (no-preload-599042) Reserved static IP address: 192.168.72.14
	I0815 18:37:21.767757   67936 main.go:141] libmachine: (no-preload-599042) DBG | skip adding static IP to network mk-no-preload-599042 - found existing host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"}
	I0815 18:37:21.767768   67936 main.go:141] libmachine: (no-preload-599042) DBG | Getting to WaitForSSH function...
	I0815 18:37:21.767780   67936 main.go:141] libmachine: (no-preload-599042) Waiting for SSH to be available...
	I0815 18:37:21.769674   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.769950   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.769973   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.770072   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH client type: external
	I0815 18:37:21.770103   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa (-rw-------)
	I0815 18:37:21.770134   67936 main.go:141] libmachine: (no-preload-599042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:21.770147   67936 main.go:141] libmachine: (no-preload-599042) DBG | About to run SSH command:
	I0815 18:37:21.770162   67936 main.go:141] libmachine: (no-preload-599042) DBG | exit 0
	I0815 18:37:21.888536   67936 main.go:141] libmachine: (no-preload-599042) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:21.888900   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetConfigRaw
	I0815 18:37:21.889541   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:21.892351   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892730   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.892760   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892976   67936 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/config.json ...
	I0815 18:37:21.893181   67936 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:21.893203   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:21.893404   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.895471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895774   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.895812   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895967   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.896153   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896334   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896522   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.896697   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.896872   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.896884   67936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:21.992598   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:21.992622   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.992856   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:37:21.992884   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.993095   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.995586   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.995902   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.995930   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.996051   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.996239   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996375   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996538   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.996691   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.996869   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.996884   67936 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-599042 && echo "no-preload-599042" | sudo tee /etc/hostname
	I0815 18:37:22.106513   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-599042
	
	I0815 18:37:22.106553   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.109655   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110111   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.110143   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110362   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.110548   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110718   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110838   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.110970   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.111141   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.111162   67936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-599042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-599042/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-599042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:22.221858   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:22.221898   67936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:22.221924   67936 buildroot.go:174] setting up certificates
	I0815 18:37:22.221938   67936 provision.go:84] configureAuth start
	I0815 18:37:22.221956   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:22.222278   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:22.225058   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225374   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.225410   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225544   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.227539   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.227885   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.227929   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.228052   67936 provision.go:143] copyHostCerts
	I0815 18:37:22.228111   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:22.228126   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:22.228190   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:22.228273   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:22.228282   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:22.228301   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:22.228352   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:22.228359   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:22.228375   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:22.228428   67936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.no-preload-599042 san=[127.0.0.1 192.168.72.14 localhost minikube no-preload-599042]
	I0815 18:37:22.383520   67936 provision.go:177] copyRemoteCerts
	I0815 18:37:22.383578   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:22.383601   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.386048   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386303   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.386338   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386566   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.386722   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.386894   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.387036   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.470828   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:22.494929   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:22.519545   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:37:22.544417   67936 provision.go:87] duration metric: took 322.465732ms to configureAuth
	I0815 18:37:22.544442   67936 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:22.544661   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:22.544736   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.547284   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547610   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.547641   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547876   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.548076   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548271   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548413   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.548594   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.548795   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.548818   67936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:22.803896   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:22.803924   67936 machine.go:96] duration metric: took 910.728961ms to provisionDockerMachine
	I0815 18:37:22.803935   67936 start.go:293] postStartSetup for "no-preload-599042" (driver="kvm2")
	I0815 18:37:22.803945   67936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:22.803959   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:22.804274   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:22.804322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.807041   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807437   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.807467   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807570   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.807747   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.807906   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.808002   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.887667   67936 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:22.892368   67936 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:22.892393   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:22.892480   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:22.892588   67936 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:22.892681   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:22.901987   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:22.927782   67936 start.go:296] duration metric: took 123.834401ms for postStartSetup
	I0815 18:37:22.927823   67936 fix.go:56] duration metric: took 18.630196933s for fixHost
	I0815 18:37:22.927848   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.930378   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930728   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.930755   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930868   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.931043   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931386   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.931538   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.931705   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.931718   67936 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:23.029393   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747042.997661196
	
	I0815 18:37:23.029423   67936 fix.go:216] guest clock: 1723747042.997661196
	I0815 18:37:23.029433   67936 fix.go:229] Guest: 2024-08-15 18:37:22.997661196 +0000 UTC Remote: 2024-08-15 18:37:22.927828036 +0000 UTC m=+353.975665928 (delta=69.83316ms)
	I0815 18:37:23.029455   67936 fix.go:200] guest clock delta is within tolerance: 69.83316ms
	I0815 18:37:23.029465   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 18.731874864s
	I0815 18:37:23.029491   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.029730   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:23.031885   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032242   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.032261   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032449   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.032908   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033062   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033149   67936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:23.033197   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.033303   67936 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:23.033322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.035943   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.035987   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036327   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036433   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036463   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036482   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036657   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036836   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036855   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.036966   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.037039   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037119   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037183   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.037242   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.117399   67936 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:23.138614   67936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:23.287862   67936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:23.293943   67936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:23.294013   67936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:23.310957   67936 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:23.310987   67936 start.go:495] detecting cgroup driver to use...
	I0815 18:37:23.311067   67936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:23.326641   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:23.340650   67936 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:23.340708   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:23.355401   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:23.369033   67936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:23.480891   67936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:23.629690   67936 docker.go:233] disabling docker service ...
	I0815 18:37:23.629782   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:23.644372   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:23.658312   67936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:23.779999   67936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:23.902630   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:23.917453   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:23.935696   67936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:37:23.935749   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.946031   67936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:23.946106   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.956639   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.967148   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.978049   67936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:23.989000   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.999290   67936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.017002   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.027432   67936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:24.036714   67936 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:24.036770   67936 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:24.048956   67936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:24.058269   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:24.173548   67936 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:24.316383   67936 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:24.316462   67936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:24.321726   67936 start.go:563] Will wait 60s for crictl version
	I0815 18:37:24.321803   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.325718   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:24.362995   67936 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:24.363099   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.392678   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.424128   67936 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:37:20.654186   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:23.154893   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:21.658499   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:24.159865   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:22.901608   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.401519   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.901287   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.401831   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.901547   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.401220   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.901109   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.401441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.901515   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:27.401258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.425451   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:24.428263   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428631   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:24.428656   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428833   67936 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:24.433343   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:24.446011   67936 kubeadm.go:883] updating cluster {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:24.446123   67936 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:37:24.446168   67936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:24.484321   67936 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:37:24.484346   67936 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:24.484417   67936 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.484429   67936 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.484444   67936 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.484470   67936 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.484472   67936 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.484581   67936 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.484583   67936 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 18:37:24.484585   67936 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485844   67936 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 18:37:24.485852   67936 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.485837   67936 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.485906   67936 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.646217   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.653405   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.658441   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.662835   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.662858   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.681979   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.715361   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 18:37:24.722352   67936 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 18:37:24.722391   67936 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.722450   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.787439   67936 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 18:37:24.787486   67936 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.787530   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810570   67936 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 18:37:24.810606   67936 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 18:37:24.810612   67936 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.810630   67936 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.810666   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810667   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841566   67936 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 18:37:24.841617   67936 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.841669   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841698   67936 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 18:37:24.841743   67936 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.841800   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.950875   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.950918   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.950933   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.950989   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.951004   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.951052   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.079551   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.079601   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.079634   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.084852   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.084874   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.084910   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.216095   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.216235   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.216308   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.216384   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.216400   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.216431   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.336055   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 18:37:25.336126   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 18:37:25.336180   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.336222   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:25.336181   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 18:37:25.336320   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:25.352527   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 18:37:25.352566   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 18:37:25.352592   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 18:37:25.352639   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:25.352650   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:25.352702   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:25.355747   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 18:37:25.355764   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355769   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 18:37:25.355797   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355806   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 18:37:25.363222   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 18:37:25.363257   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 18:37:25.363435   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 18:37:25.476601   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142118   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.786287506s)
	I0815 18:37:28.142134   67936 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.665496935s)
	I0815 18:37:28.142146   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 18:37:28.142177   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142190   67936 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 18:37:28.142220   67936 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142244   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142259   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:25.155516   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.156071   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:29.157389   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:26.658491   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:28.659080   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.901777   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.401103   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.901746   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.401521   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.901691   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.401326   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.901672   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.401534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.901013   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:32.401696   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.598348   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.456076001s)
	I0815 18:37:29.598380   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 18:37:29.598404   67936 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598407   67936 ssh_runner.go:235] Completed: which crictl: (1.456124508s)
	I0815 18:37:29.598451   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598474   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.495864   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.897383444s)
	I0815 18:37:31.495897   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.897403663s)
	I0815 18:37:31.495902   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 18:37:31.495931   67936 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.657799   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:34.156377   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:31.158308   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:33.159177   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:35.668218   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:32.901441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.901095   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.401705   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.901020   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.401019   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.901094   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.400952   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.901717   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:37.401701   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.526372   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.030374686s)
	I0815 18:37:35.526410   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 18:37:35.526422   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.030343547s)
	I0815 18:37:35.526438   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.526482   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:35.526483   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.570806   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 18:37:35.570906   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:37.500059   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.973499408s)
	I0815 18:37:37.500098   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 18:37:37.500120   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:37.500072   67936 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.929150036s)
	I0815 18:37:37.500208   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 18:37:37.500161   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:36.157239   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.656856   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.158685   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:40.158728   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:37.901353   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.401426   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.901599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.401173   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.901593   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.401758   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.401698   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:42.401409   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.563532   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.063281797s)
	I0815 18:37:39.563562   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 18:37:39.563595   67936 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:39.563642   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:40.208180   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 18:37:40.208232   67936 cache_images.go:123] Successfully loaded all cached images
	I0815 18:37:40.208240   67936 cache_images.go:92] duration metric: took 15.723882738s to LoadCachedImages
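	The block above shows the no-preload profile transferring each cached image tarball to the node and loading it with `sudo podman load -i ...` until LoadCachedImages completes. A minimal, illustrative Go sketch of a single load step follows; it is not minikube source, and the tarball path is simply one of the files named in the log.

```go
// Illustrative sketch only (not minikube source): load one cached image
// tarball into the node's podman/CRI-O image store, as the
// `sudo podman load -i ...` commands in the log above do.
package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImage(tarball string) error {
	// Runs: sudo podman load -i <tarball>
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load failed: %v\n%s", err, out)
	}
	fmt.Printf("loaded %s:\n%s", tarball, out)
	return nil
}

func main() {
	// Path taken from the log; adjust for your own cache layout.
	if err := loadCachedImage("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
		fmt.Println(err)
	}
}
```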
	I0815 18:37:40.208252   67936 kubeadm.go:934] updating node { 192.168.72.14 8443 v1.31.0 crio true true} ...
	I0815 18:37:40.208416   67936 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-599042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:40.208544   67936 ssh_runner.go:195] Run: crio config
	I0815 18:37:40.261526   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:40.261545   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:40.261552   67936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:40.261572   67936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.14 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-599042 NodeName:no-preload-599042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:37:40.261688   67936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-599042"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:40.261742   67936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:37:40.271844   67936 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:40.271921   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:40.280957   67936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0815 18:37:40.297378   67936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:40.313215   67936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0815 18:37:40.329640   67936 ssh_runner.go:195] Run: grep 192.168.72.14	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:40.333331   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:40.344805   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:40.457352   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:40.475219   67936 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042 for IP: 192.168.72.14
	I0815 18:37:40.475238   67936 certs.go:194] generating shared ca certs ...
	I0815 18:37:40.475252   67936 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:40.475416   67936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:40.475475   67936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:40.475489   67936 certs.go:256] generating profile certs ...
	I0815 18:37:40.475591   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.key
	I0815 18:37:40.475670   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key.15ba6898
	I0815 18:37:40.475714   67936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key
	I0815 18:37:40.475865   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:40.475904   67936 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:40.475917   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:40.475950   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:40.475978   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:40.476012   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:40.476069   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:40.476738   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:40.513554   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:40.549095   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:40.578010   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:40.612637   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:37:40.639974   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:37:40.672937   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:40.696889   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:37:40.721258   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:40.744104   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:40.766463   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:40.788628   67936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:40.805346   67936 ssh_runner.go:195] Run: openssl version
	I0815 18:37:40.811193   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:40.822610   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826918   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826969   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.832544   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:40.843338   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:40.854032   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858512   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858563   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.864247   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:40.874724   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:40.885538   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889849   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889899   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.895258   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:40.906841   67936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:40.911629   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:40.918085   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:40.924194   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:40.930009   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:40.935634   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:40.941168   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
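	The six `openssl x509 ... -checkend 86400` commands above confirm that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds). Below is a minimal, illustrative Go sketch of the same check; it is not taken from the minikube source, and the certificate path is just one of the files named in the log.

```go
// Illustrative sketch only: report whether a PEM-encoded certificate expires
// within the given duration, mirroring `openssl x509 -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the next d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```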
	I0815 18:37:40.946761   67936 kubeadm.go:392] StartCluster: {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:40.946836   67936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:40.946874   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:40.990733   67936 cri.go:89] found id: ""
	I0815 18:37:40.990808   67936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:41.002969   67936 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:41.002988   67936 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:41.003041   67936 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:41.013722   67936 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:41.015079   67936 kubeconfig.go:125] found "no-preload-599042" server: "https://192.168.72.14:8443"
	I0815 18:37:41.017905   67936 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:41.029240   67936 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.14
	I0815 18:37:41.029271   67936 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:41.029284   67936 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:41.029326   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:41.064689   67936 cri.go:89] found id: ""
	I0815 18:37:41.064754   67936 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:41.085195   67936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:41.096355   67936 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:41.096375   67936 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:41.096425   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:41.106887   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:41.106941   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:41.117599   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:41.127956   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:41.128020   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:41.137384   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.146075   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:41.146122   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.156417   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:41.165287   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:41.165325   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:41.174245   67936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:41.183335   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:41.314804   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.422591   67936 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.107749325s)
	I0815 18:37:42.422628   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.642065   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.710265   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.791233   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:42.791334   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.291538   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.791682   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.831611   67936 api_server.go:72] duration metric: took 1.040390925s to wait for apiserver process to appear ...
	I0815 18:37:43.831641   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:37:43.831662   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:43.832110   67936 api_server.go:269] stopped: https://192.168.72.14:8443/healthz: Get "https://192.168.72.14:8443/healthz": dial tcp 192.168.72.14:8443: connect: connection refused
	I0815 18:37:41.154701   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:43.655756   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.661385   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:45.158918   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.901106   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.401146   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.901869   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.401483   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.901302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.401505   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.901504   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.401025   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.901713   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:47.401588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.332554   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.112640   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:37:47.112668   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:37:47.112681   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.244211   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.244246   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.332375   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.339129   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.339153   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.831731   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.836308   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.836330   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.331914   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.336314   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:48.336347   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.831862   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.836012   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:37:48.842971   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:37:48.842996   67936 api_server.go:131] duration metric: took 5.011346791s to wait for apiserver health ...
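	The healthz probes above begin with a refused connection, then see 403 (anonymous user) and 500 (post-start hooks still initialising) responses before `/healthz` finally returns 200 "ok". The Go sketch below polls an apiserver health endpoint in the same spirit; it is illustrative only, and the URL, 500ms retry interval, and timeout are assumptions read off the log.

```go
// Illustrative sketch only: poll an apiserver /healthz endpoint until it
// returns 200 OK, similar in spirit to the api_server.go checks logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves a self-signed cert here, so skip verification.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // retry interval assumed from the log cadence
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// Endpoint taken from the log above.
	if err := waitForHealthz("https://192.168.72.14:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```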
	I0815 18:37:48.843008   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:48.843015   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:48.844939   67936 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:37:48.846262   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:37:48.857335   67936 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:37:48.876370   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:37:48.886582   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:37:48.886628   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:37:48.886640   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:37:48.886653   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:37:48.886666   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:37:48.886679   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 18:37:48.886691   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:37:48.886701   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:37:48.886711   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 18:37:48.886722   67936 system_pods.go:74] duration metric: took 10.329234ms to wait for pod list to return data ...
	I0815 18:37:48.886736   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:37:48.890525   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:37:48.890560   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:37:48.890571   67936 node_conditions.go:105] duration metric: took 3.828616ms to run NodePressure ...
	I0815 18:37:48.890590   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:46.155548   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:48.655549   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:49.183845   67936 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188602   67936 kubeadm.go:739] kubelet initialised
	I0815 18:37:49.188629   67936 kubeadm.go:740] duration metric: took 4.755371ms waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188639   67936 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:49.193101   67936 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.199195   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199215   67936 pod_ready.go:82] duration metric: took 6.088761ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.199226   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199236   67936 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.205076   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205095   67936 pod_ready.go:82] duration metric: took 5.848521ms for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.205105   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205111   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.210559   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210578   67936 pod_ready.go:82] duration metric: took 5.449861ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.210587   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210594   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.281799   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281828   67936 pod_ready.go:82] duration metric: took 71.206144ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.281840   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281850   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.680097   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680121   67936 pod_ready.go:82] duration metric: took 398.261641ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.680131   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680136   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.080391   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080415   67936 pod_ready.go:82] duration metric: took 400.272871ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.080425   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080430   67936 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.482715   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482744   67936 pod_ready.go:82] duration metric: took 402.304556ms for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.482753   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482761   67936 pod_ready.go:39] duration metric: took 1.294109816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:50.482779   67936 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:37:50.495888   67936 ops.go:34] apiserver oom_adj: -16
	I0815 18:37:50.495912   67936 kubeadm.go:597] duration metric: took 9.4929178s to restartPrimaryControlPlane
	I0815 18:37:50.495924   67936 kubeadm.go:394] duration metric: took 9.549167115s to StartCluster
	I0815 18:37:50.495943   67936 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.496020   67936 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:50.497743   67936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.497976   67936 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:37:50.498166   67936 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:37:50.498225   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:50.498251   67936 addons.go:69] Setting storage-provisioner=true in profile "no-preload-599042"
	I0815 18:37:50.498266   67936 addons.go:69] Setting default-storageclass=true in profile "no-preload-599042"
	I0815 18:37:50.498287   67936 addons.go:234] Setting addon storage-provisioner=true in "no-preload-599042"
	I0815 18:37:50.498303   67936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-599042"
	W0815 18:37:50.498311   67936 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:37:50.498343   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.498708   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498733   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498745   67936 addons.go:69] Setting metrics-server=true in profile "no-preload-599042"
	I0815 18:37:50.498753   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.498783   67936 addons.go:234] Setting addon metrics-server=true in "no-preload-599042"
	W0815 18:37:50.498795   67936 addons.go:243] addon metrics-server should already be in state true
	I0815 18:37:50.498734   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.499070   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.499350   67936 out.go:177] * Verifying Kubernetes components...
	I0815 18:37:50.499436   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.499467   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.500629   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:50.514727   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0815 18:37:50.514956   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0815 18:37:50.515112   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515379   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515622   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515639   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.515844   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515866   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.516028   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.516697   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.516741   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.516854   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.517455   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.517487   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.517879   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0815 18:37:50.518180   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.518645   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.518666   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.518975   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.519155   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.522283   67936 addons.go:234] Setting addon default-storageclass=true in "no-preload-599042"
	W0815 18:37:50.522301   67936 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:37:50.522321   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.522589   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.522616   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.533306   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0815 18:37:50.533891   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.534378   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.534403   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.535077   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.535251   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.536333   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0815 18:37:50.536960   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.537421   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.537484   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.537500   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.537581   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0815 18:37:50.537832   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.537992   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.538044   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.538964   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.538983   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.539442   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.539494   67936 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:37:50.540127   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.540138   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.540166   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.540633   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:37:50.540653   67936 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:37:50.540673   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.541641   67936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:47.658449   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.159642   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.542848   67936 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.542867   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:37:50.542883   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.544059   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544644   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.544669   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544879   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.545056   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.545226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.545363   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.545609   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.545957   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.545984   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.546188   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.546350   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.546459   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.546563   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.576049   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0815 18:37:50.576398   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.576963   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.576991   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.577315   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.577536   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.579041   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.579244   67936 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.579259   67936 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:37:50.579273   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.583471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583857   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.583884   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583984   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.584140   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.584298   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.584431   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.711232   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:50.738297   67936 node_ready.go:35] waiting up to 6m0s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:50.787041   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.876459   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.926707   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:37:50.926727   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:37:50.967734   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:37:50.967764   67936 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:37:50.994557   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:50.994580   67936 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:37:51.018573   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:51.217167   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217199   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217511   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217561   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217570   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.217579   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217592   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217846   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217889   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217900   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.223755   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.223774   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.224006   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.224024   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.794888   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.794919   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795198   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.795227   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795240   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.795256   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.795267   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795503   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795521   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936158   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936178   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936438   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.936467   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936505   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936519   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936528   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936754   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936773   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936785   67936 addons.go:475] Verifying addon metrics-server=true in "no-preload-599042"
	I0815 18:37:51.938619   67936 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 18:37:47.901026   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.401023   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.901661   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.401358   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.901410   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.401040   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.901695   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.401365   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.901733   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:52.401439   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.939743   67936 addons.go:510] duration metric: took 1.441583595s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
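	The addon-enable step above only confirms that the metrics-server manifests were applied; in the rest of this log the metrics-server pod keeps reporting "Ready":"False" (the repeated pod_ready.go:103 lines). A minimal manual follow-up sketch, assuming the profile name "no-preload-599042" from this log also serves as the kubeconfig context and that the addon's deployment carries the conventional "k8s-app=metrics-server" label (both assumptions; these commands are not run by the test harness):
	# hypothetical diagnostic commands, not part of the test output
	kubectl --context no-preload-599042 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context no-preload-599042 -n kube-system describe pod -l k8s-app=metrics-server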
	I0815 18:37:52.742152   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:51.155350   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:53.654487   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.658151   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:54.658269   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.901361   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.401417   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.901380   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.401820   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.901113   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.401270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.900941   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.901834   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:57.401496   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.242506   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:57.742723   67936 node_ready.go:49] node "no-preload-599042" has status "Ready":"True"
	I0815 18:37:57.742746   67936 node_ready.go:38] duration metric: took 7.00442012s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:57.742764   67936 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:57.747927   67936 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752478   67936 pod_ready.go:93] pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:57.752513   67936 pod_ready.go:82] duration metric: took 4.560553ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752524   67936 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760896   67936 pod_ready.go:93] pod "etcd-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.760924   67936 pod_ready.go:82] duration metric: took 1.008390436s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760937   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774529   67936 pod_ready.go:93] pod "kube-apiserver-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.774557   67936 pod_ready.go:82] duration metric: took 13.611063ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774568   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793851   67936 pod_ready.go:93] pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.793873   67936 pod_ready.go:82] duration metric: took 19.297089ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793885   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943096   67936 pod_ready.go:93] pod "kube-proxy-bwb9h" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.943120   67936 pod_ready.go:82] duration metric: took 149.227014ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943129   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:56.154874   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:58.655280   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.158586   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:59.159257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.901938   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.401246   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.900950   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.400984   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.401707   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.901455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.901613   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:02.401302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.342426   67936 pod_ready.go:93] pod "kube-scheduler-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:59.342447   67936 pod_ready.go:82] duration metric: took 399.312035ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:59.342460   67936 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:38:01.349419   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.848558   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.154194   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.154779   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.658502   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:04.158895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:02.901914   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.401357   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.901258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.400961   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.401852   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.901115   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.401170   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.901694   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:07.401816   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.849586   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.349057   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:05.155847   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.653607   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:09.654245   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:06.658092   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.659361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.900966   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.401136   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.901534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.400982   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.901126   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.401120   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.901175   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.401704   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.901710   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:12.401712   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.349443   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.349942   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.655212   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:14.154508   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.158562   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:13.657985   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:15.658088   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.901680   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.401532   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.901198   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:13.901295   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:13.938743   68713 cri.go:89] found id: ""
	I0815 18:38:13.938770   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.938778   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:13.938786   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:13.938843   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:13.971997   68713 cri.go:89] found id: ""
	I0815 18:38:13.972029   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.972041   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:13.972048   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:13.972111   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:14.006793   68713 cri.go:89] found id: ""
	I0815 18:38:14.006825   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.006836   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:14.006844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:14.006903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:14.041546   68713 cri.go:89] found id: ""
	I0815 18:38:14.041575   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.041587   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:14.041595   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:14.041680   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:14.077614   68713 cri.go:89] found id: ""
	I0815 18:38:14.077639   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.077648   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:14.077653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:14.077704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:14.113683   68713 cri.go:89] found id: ""
	I0815 18:38:14.113711   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.113721   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:14.113730   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:14.113790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:14.149581   68713 cri.go:89] found id: ""
	I0815 18:38:14.149608   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.149616   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:14.149622   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:14.149678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:14.191576   68713 cri.go:89] found id: ""
	I0815 18:38:14.191606   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.191614   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:14.191622   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:14.191635   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:14.243253   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:14.243287   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:14.256818   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:14.256845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:14.382914   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.382933   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:14.382948   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:14.461826   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:14.461859   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.005615   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:17.020977   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:17.021042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:17.070191   68713 cri.go:89] found id: ""
	I0815 18:38:17.070220   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.070232   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:17.070239   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:17.070301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:17.118582   68713 cri.go:89] found id: ""
	I0815 18:38:17.118612   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.118624   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:17.118631   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:17.118693   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:17.165380   68713 cri.go:89] found id: ""
	I0815 18:38:17.165404   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.165413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:17.165421   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:17.165483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:17.204630   68713 cri.go:89] found id: ""
	I0815 18:38:17.204660   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.204670   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:17.204678   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:17.204740   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:17.239182   68713 cri.go:89] found id: ""
	I0815 18:38:17.239210   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.239219   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:17.239226   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:17.239285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:17.276329   68713 cri.go:89] found id: ""
	I0815 18:38:17.276356   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.276367   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:17.276375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:17.276472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:17.312387   68713 cri.go:89] found id: ""
	I0815 18:38:17.312418   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.312427   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:17.312433   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:17.312485   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:17.348277   68713 cri.go:89] found id: ""
	I0815 18:38:17.348300   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.348308   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:17.348315   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:17.348334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:17.424886   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:17.424924   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.465491   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:17.465518   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:17.517687   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:17.517719   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:17.531928   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:17.531959   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:17.606987   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.849001   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:17.349912   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:16.155496   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.653621   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.159850   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.658717   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.107740   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:20.123194   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:20.123255   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:20.163586   68713 cri.go:89] found id: ""
	I0815 18:38:20.163608   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.163619   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:20.163627   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:20.163676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:20.200171   68713 cri.go:89] found id: ""
	I0815 18:38:20.200196   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.200204   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:20.200210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:20.200270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:20.234739   68713 cri.go:89] found id: ""
	I0815 18:38:20.234770   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.234781   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:20.234788   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:20.234849   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:20.270182   68713 cri.go:89] found id: ""
	I0815 18:38:20.270206   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.270215   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:20.270220   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:20.270281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:20.303643   68713 cri.go:89] found id: ""
	I0815 18:38:20.303672   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.303682   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:20.303690   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:20.303748   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:20.339399   68713 cri.go:89] found id: ""
	I0815 18:38:20.339431   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.339441   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:20.339449   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:20.339511   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:20.377220   68713 cri.go:89] found id: ""
	I0815 18:38:20.377245   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.377252   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:20.377258   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:20.377310   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:20.411202   68713 cri.go:89] found id: ""
	I0815 18:38:20.411238   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.411249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:20.411268   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:20.411282   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:20.462846   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:20.462879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:20.476569   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:20.476597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:20.554243   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:20.554269   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:20.554285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:20.637450   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:20.637493   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:19.849194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:21.849502   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.655378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.154633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.160747   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.658706   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.182633   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:23.196953   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:23.197026   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:23.232011   68713 cri.go:89] found id: ""
	I0815 18:38:23.232039   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.232051   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:23.232064   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:23.232114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:23.266963   68713 cri.go:89] found id: ""
	I0815 18:38:23.266992   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.267000   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:23.267006   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:23.267069   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:23.306473   68713 cri.go:89] found id: ""
	I0815 18:38:23.306496   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.306504   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:23.306510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:23.306574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:23.343542   68713 cri.go:89] found id: ""
	I0815 18:38:23.343574   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.343585   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:23.343592   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:23.343652   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:23.382468   68713 cri.go:89] found id: ""
	I0815 18:38:23.382527   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.382539   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:23.382547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:23.382612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:23.418857   68713 cri.go:89] found id: ""
	I0815 18:38:23.418882   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.418891   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:23.418897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:23.418956   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:23.460971   68713 cri.go:89] found id: ""
	I0815 18:38:23.461004   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.461016   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:23.461023   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:23.461100   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:23.494139   68713 cri.go:89] found id: ""
	I0815 18:38:23.494172   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.494183   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:23.494194   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:23.494208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:23.547874   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:23.547908   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:23.562251   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:23.562278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:23.636503   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:23.636528   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:23.636545   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:23.716020   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:23.716051   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.255081   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:26.270118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:26.270184   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:26.308586   68713 cri.go:89] found id: ""
	I0815 18:38:26.308612   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.308623   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:26.308630   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:26.308688   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:26.344364   68713 cri.go:89] found id: ""
	I0815 18:38:26.344394   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.344410   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:26.344418   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:26.344533   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:26.381621   68713 cri.go:89] found id: ""
	I0815 18:38:26.381642   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.381650   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:26.381655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:26.381699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:26.416091   68713 cri.go:89] found id: ""
	I0815 18:38:26.416118   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.416128   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:26.416136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:26.416195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:26.456038   68713 cri.go:89] found id: ""
	I0815 18:38:26.456068   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.456080   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:26.456088   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:26.456151   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:26.490728   68713 cri.go:89] found id: ""
	I0815 18:38:26.490758   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.490769   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:26.490776   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:26.490837   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:26.529388   68713 cri.go:89] found id: ""
	I0815 18:38:26.529422   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.529434   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:26.529440   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:26.529489   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:26.567452   68713 cri.go:89] found id: ""
	I0815 18:38:26.567475   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.567484   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:26.567491   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:26.567503   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:26.641841   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:26.641863   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:26.641879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:26.719403   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:26.719438   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.760460   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:26.760507   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:26.814450   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:26.814480   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:24.349319   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:26.850207   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.155213   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.654265   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.656816   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.663849   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:30.158417   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
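The interleaved pod_ready.go:103 lines come from the other StartStop clusters running in parallel (processes 67936, 68248 and 68429), each polling its metrics-server pod until the Ready condition turns True. A hedged one-liner for inspecting that condition manually (pod name taken from the log; the jsonpath expression is illustrative and not minikube's own check):

    kubectl -n kube-system get pod metrics-server-6867b74b74-djv7r \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'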
	I0815 18:38:29.329451   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:29.344634   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:29.344706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:29.379278   68713 cri.go:89] found id: ""
	I0815 18:38:29.379308   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.379319   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:29.379326   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:29.379385   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:29.411854   68713 cri.go:89] found id: ""
	I0815 18:38:29.411881   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.411891   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:29.411898   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:29.411965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:29.443975   68713 cri.go:89] found id: ""
	I0815 18:38:29.444004   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.444014   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:29.444022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:29.444081   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:29.477919   68713 cri.go:89] found id: ""
	I0815 18:38:29.477944   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.477954   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:29.477962   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:29.478020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:29.518944   68713 cri.go:89] found id: ""
	I0815 18:38:29.518973   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.518985   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:29.518992   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:29.519052   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:29.553876   68713 cri.go:89] found id: ""
	I0815 18:38:29.553903   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.553913   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:29.553921   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:29.553974   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:29.590768   68713 cri.go:89] found id: ""
	I0815 18:38:29.590804   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.590815   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:29.590823   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:29.590879   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:29.625553   68713 cri.go:89] found id: ""
	I0815 18:38:29.625578   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.625586   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:29.625595   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:29.625606   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.668447   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:29.668478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:29.721002   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:29.721035   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:29.734955   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:29.734983   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:29.808703   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:29.808726   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:29.808742   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.397781   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:32.413876   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:32.413937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:32.453689   68713 cri.go:89] found id: ""
	I0815 18:38:32.453720   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.453777   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:32.453791   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:32.453839   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:32.490529   68713 cri.go:89] found id: ""
	I0815 18:38:32.490559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.490567   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:32.490573   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:32.490622   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:32.527680   68713 cri.go:89] found id: ""
	I0815 18:38:32.527710   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.527720   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:32.527727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:32.527790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:32.564619   68713 cri.go:89] found id: ""
	I0815 18:38:32.564656   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.564667   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:32.564677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:32.564745   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:32.600530   68713 cri.go:89] found id: ""
	I0815 18:38:32.600559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.600570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:32.600577   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:32.600639   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:32.636779   68713 cri.go:89] found id: ""
	I0815 18:38:32.636813   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.636821   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:32.636828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:32.636897   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:32.673743   68713 cri.go:89] found id: ""
	I0815 18:38:32.673774   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.673786   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:32.673794   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:32.673853   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:32.709678   68713 cri.go:89] found id: ""
	I0815 18:38:32.709708   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.709719   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:32.709730   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:32.709744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.785961   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:32.785998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.349763   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:31.350398   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:33.848873   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.155992   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.159855   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.657783   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.828205   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:32.828237   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:32.894624   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:32.894666   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:32.910742   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:32.910769   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:32.980853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.481438   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:35.495373   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:35.495444   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:35.529184   68713 cri.go:89] found id: ""
	I0815 18:38:35.529212   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.529221   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:35.529226   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:35.529275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:35.565188   68713 cri.go:89] found id: ""
	I0815 18:38:35.565214   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.565221   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:35.565227   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:35.565281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:35.600386   68713 cri.go:89] found id: ""
	I0815 18:38:35.600416   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.600428   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:35.600435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:35.600519   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:35.634255   68713 cri.go:89] found id: ""
	I0815 18:38:35.634278   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.634287   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:35.634293   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:35.634339   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:35.670236   68713 cri.go:89] found id: ""
	I0815 18:38:35.670260   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.670268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:35.670273   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:35.670354   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:35.707691   68713 cri.go:89] found id: ""
	I0815 18:38:35.707714   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.707722   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:35.707727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:35.707782   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:35.745791   68713 cri.go:89] found id: ""
	I0815 18:38:35.745820   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.745832   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:35.745844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:35.745916   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:35.784167   68713 cri.go:89] found id: ""
	I0815 18:38:35.784195   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.784205   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:35.784217   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:35.784234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:35.864681   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:35.864711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:35.906831   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:35.906858   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:35.960328   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:35.960366   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:35.974401   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:35.974428   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:36.044789   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
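Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl cannot reach the API server on localhost:8443, which is consistent with the crictl probes above finding no kube-apiserver container at all. Two quick manual checks on the node, assuming the same paths as in the log (the curl probe is illustrative; minikube itself only runs the kubectl command recorded in the log):

    # does anything answer on the apiserver port?
    curl -k https://localhost:8443/healthz
    # the exact command minikube runs, taken verbatim from the log
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig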
	I0815 18:38:35.849744   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.348058   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.654916   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.155585   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.658767   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.159236   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.545951   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:38.561473   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:38.561540   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:38.597621   68713 cri.go:89] found id: ""
	I0815 18:38:38.597658   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.597668   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:38.597679   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:38.597756   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:38.632081   68713 cri.go:89] found id: ""
	I0815 18:38:38.632115   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.632127   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:38.632135   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:38.632203   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:38.669917   68713 cri.go:89] found id: ""
	I0815 18:38:38.669944   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.669952   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:38.669958   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:38.670015   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:38.707552   68713 cri.go:89] found id: ""
	I0815 18:38:38.707574   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.707582   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:38.707588   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:38.707642   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:38.746054   68713 cri.go:89] found id: ""
	I0815 18:38:38.746082   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.746093   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:38.746101   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:38.746166   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:38.783901   68713 cri.go:89] found id: ""
	I0815 18:38:38.783933   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.783945   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:38.783952   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:38.784018   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:38.825411   68713 cri.go:89] found id: ""
	I0815 18:38:38.825441   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.825452   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:38.825459   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:38.825520   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:38.863174   68713 cri.go:89] found id: ""
	I0815 18:38:38.863219   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.863231   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:38.863241   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:38.863254   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:38.914016   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:38.914045   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:38.927634   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:38.927659   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:38.993380   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:38.993407   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:38.993422   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:39.077075   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:39.077116   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:41.620219   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:41.633572   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:41.633628   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:41.670330   68713 cri.go:89] found id: ""
	I0815 18:38:41.670351   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.670358   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:41.670364   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:41.670418   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:41.706467   68713 cri.go:89] found id: ""
	I0815 18:38:41.706494   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.706502   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:41.706508   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:41.706564   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:41.742915   68713 cri.go:89] found id: ""
	I0815 18:38:41.742958   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.742970   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:41.742978   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:41.743044   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:41.778650   68713 cri.go:89] found id: ""
	I0815 18:38:41.778679   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.778687   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:41.778692   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:41.778739   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:41.813329   68713 cri.go:89] found id: ""
	I0815 18:38:41.813358   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.813369   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:41.813375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:41.813427   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:41.851351   68713 cri.go:89] found id: ""
	I0815 18:38:41.851383   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.851391   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:41.851398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:41.851460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:41.895097   68713 cri.go:89] found id: ""
	I0815 18:38:41.895130   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.895142   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:41.895150   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:41.895209   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:41.931306   68713 cri.go:89] found id: ""
	I0815 18:38:41.931336   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.931353   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:41.931365   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:41.931381   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:41.944796   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:41.944828   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:42.018868   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:42.018891   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:42.018903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:42.104304   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:42.104340   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:42.143625   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:42.143655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:40.349197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:42.850034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.655478   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.155025   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.159976   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:43.658013   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:45.658358   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.698568   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:44.712171   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:44.712247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.747043   68713 cri.go:89] found id: ""
	I0815 18:38:44.747071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.747079   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:44.747085   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:44.747143   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:44.782660   68713 cri.go:89] found id: ""
	I0815 18:38:44.782691   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.782703   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:44.782711   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:44.782765   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:44.821111   68713 cri.go:89] found id: ""
	I0815 18:38:44.821138   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.821146   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:44.821152   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:44.821222   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:44.859602   68713 cri.go:89] found id: ""
	I0815 18:38:44.859635   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.859647   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:44.859655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:44.859717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:44.895037   68713 cri.go:89] found id: ""
	I0815 18:38:44.895071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.895083   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:44.895090   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:44.895175   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:44.928729   68713 cri.go:89] found id: ""
	I0815 18:38:44.928759   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.928771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:44.928781   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:44.928844   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:44.963945   68713 cri.go:89] found id: ""
	I0815 18:38:44.963977   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.963987   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:44.963996   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:44.964060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:45.001166   68713 cri.go:89] found id: ""
	I0815 18:38:45.001195   68713 logs.go:276] 0 containers: []
	W0815 18:38:45.001206   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:45.001218   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:45.001234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:45.015181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:45.015209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:45.084297   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:45.084322   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:45.084334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:45.173833   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:45.173866   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:45.211863   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:45.211899   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:47.771009   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:47.784865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:47.784926   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.850332   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.347985   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:46.654797   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:48.654936   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.658823   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:50.178115   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.818497   68713 cri.go:89] found id: ""
	I0815 18:38:47.818526   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.818538   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:47.818545   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:47.818608   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:47.857900   68713 cri.go:89] found id: ""
	I0815 18:38:47.857927   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.857935   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:47.857941   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:47.857997   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:47.895778   68713 cri.go:89] found id: ""
	I0815 18:38:47.895809   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.895822   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:47.895829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:47.895887   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:47.937410   68713 cri.go:89] found id: ""
	I0815 18:38:47.937434   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.937442   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:47.937448   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:47.937505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:47.976414   68713 cri.go:89] found id: ""
	I0815 18:38:47.976442   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.976450   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:47.976455   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:47.976525   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:48.014863   68713 cri.go:89] found id: ""
	I0815 18:38:48.014891   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.014899   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:48.014906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:48.014969   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:48.053508   68713 cri.go:89] found id: ""
	I0815 18:38:48.053536   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.053546   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:48.053554   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:48.053624   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:48.088900   68713 cri.go:89] found id: ""
	I0815 18:38:48.088931   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.088943   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:48.088954   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:48.088969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:48.140415   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:48.140447   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:48.155958   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:48.155985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:48.229338   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:48.229368   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:48.229383   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:48.317776   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:48.317814   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:50.860592   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:50.877070   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:50.877154   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:50.937590   68713 cri.go:89] found id: ""
	I0815 18:38:50.937615   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.937622   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:50.937628   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:50.937687   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:50.972573   68713 cri.go:89] found id: ""
	I0815 18:38:50.972603   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.972614   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:50.972622   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:50.972683   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:51.008786   68713 cri.go:89] found id: ""
	I0815 18:38:51.008811   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.008820   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:51.008826   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:51.008875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:51.043076   68713 cri.go:89] found id: ""
	I0815 18:38:51.043105   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.043116   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:51.043123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:51.043186   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:51.078344   68713 cri.go:89] found id: ""
	I0815 18:38:51.078379   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.078391   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:51.078398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:51.078453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:51.114494   68713 cri.go:89] found id: ""
	I0815 18:38:51.114521   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.114532   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:51.114540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:51.114600   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:51.153871   68713 cri.go:89] found id: ""
	I0815 18:38:51.153898   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.153909   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:51.153917   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:51.153984   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:51.187908   68713 cri.go:89] found id: ""
	I0815 18:38:51.187937   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.187948   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:51.187959   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:51.187974   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:51.264172   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:51.264198   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:51.264214   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:51.345238   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:51.345285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:51.385720   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:51.385745   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:51.443313   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:51.443353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:49.849156   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.348628   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:51.154188   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.155256   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.658773   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:54.659127   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.959176   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:53.972031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:53.972101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:54.010673   68713 cri.go:89] found id: ""
	I0815 18:38:54.010699   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.010707   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:54.010714   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:54.010775   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:54.045632   68713 cri.go:89] found id: ""
	I0815 18:38:54.045662   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.045672   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:54.045678   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:54.045727   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:54.082111   68713 cri.go:89] found id: ""
	I0815 18:38:54.082134   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.082142   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:54.082148   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:54.082206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:54.118210   68713 cri.go:89] found id: ""
	I0815 18:38:54.118232   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.118239   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:54.118246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:54.118305   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:54.155474   68713 cri.go:89] found id: ""
	I0815 18:38:54.155499   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.155508   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:54.155515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:54.155572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:54.193263   68713 cri.go:89] found id: ""
	I0815 18:38:54.193298   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.193305   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:54.193311   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:54.193365   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:54.233389   68713 cri.go:89] found id: ""
	I0815 18:38:54.233416   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.233428   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:54.233435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:54.233502   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:54.266127   68713 cri.go:89] found id: ""
	I0815 18:38:54.266155   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.266164   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:54.266176   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:54.266199   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:54.318724   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:54.318762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:54.332993   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:54.333022   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:54.405895   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:54.405915   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:54.405926   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.485819   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:54.485875   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.024956   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:57.038182   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:57.038246   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:57.078020   68713 cri.go:89] found id: ""
	I0815 18:38:57.078044   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.078055   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:57.078063   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:57.078127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:57.115077   68713 cri.go:89] found id: ""
	I0815 18:38:57.115101   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.115110   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:57.115118   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:57.115179   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:57.152711   68713 cri.go:89] found id: ""
	I0815 18:38:57.152737   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.152747   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:57.152755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:57.152819   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:57.190000   68713 cri.go:89] found id: ""
	I0815 18:38:57.190034   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.190042   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:57.190048   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:57.190096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:57.224947   68713 cri.go:89] found id: ""
	I0815 18:38:57.224978   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.224990   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:57.224998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:57.225060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:57.262329   68713 cri.go:89] found id: ""
	I0815 18:38:57.262365   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.262375   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:57.262383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:57.262458   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:57.299471   68713 cri.go:89] found id: ""
	I0815 18:38:57.299498   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.299507   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:57.299513   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:57.299572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:57.357163   68713 cri.go:89] found id: ""
	I0815 18:38:57.357202   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.357211   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:57.357220   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:57.357236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.405154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:57.405184   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:57.459245   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:57.459277   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:57.473663   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:57.473699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:57.546670   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:57.546699   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:57.546715   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.348864   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.848276   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:55.655015   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.158306   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.662722   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:59.159559   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.124455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:00.137985   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:00.138053   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:00.175201   68713 cri.go:89] found id: ""
	I0815 18:39:00.175231   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.175242   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:00.175250   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:00.175328   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:00.209376   68713 cri.go:89] found id: ""
	I0815 18:39:00.209406   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.209418   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:00.209426   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:00.209484   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:00.246860   68713 cri.go:89] found id: ""
	I0815 18:39:00.246889   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.246899   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:00.246906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:00.246965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:00.282787   68713 cri.go:89] found id: ""
	I0815 18:39:00.282814   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.282823   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:00.282829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:00.282875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:00.330227   68713 cri.go:89] found id: ""
	I0815 18:39:00.330256   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.330268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:00.330275   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:00.330338   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:00.363028   68713 cri.go:89] found id: ""
	I0815 18:39:00.363061   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.363072   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:00.363079   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:00.363169   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:00.400484   68713 cri.go:89] found id: ""
	I0815 18:39:00.400522   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.400533   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:00.400540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:00.400597   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:00.436187   68713 cri.go:89] found id: ""
	I0815 18:39:00.436225   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.436238   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:00.436252   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:00.436267   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:00.481960   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:00.481985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:00.535103   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:00.535138   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:00.548541   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:00.548568   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:00.619476   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:00.619505   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:00.619525   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:01.347916   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.349448   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.654384   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.155048   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:01.658374   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.658824   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.206473   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:03.222893   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:03.222967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:03.272249   68713 cri.go:89] found id: ""
	I0815 18:39:03.272275   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.272283   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:03.272291   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:03.272355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:03.336786   68713 cri.go:89] found id: ""
	I0815 18:39:03.336811   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.336819   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:03.336825   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:03.336884   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:03.375977   68713 cri.go:89] found id: ""
	I0815 18:39:03.376002   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.376011   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:03.376016   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:03.376063   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:03.410304   68713 cri.go:89] found id: ""
	I0815 18:39:03.410326   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.410335   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:03.410340   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:03.410403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:03.446147   68713 cri.go:89] found id: ""
	I0815 18:39:03.446176   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.446188   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:03.446195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:03.446256   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:03.482555   68713 cri.go:89] found id: ""
	I0815 18:39:03.482582   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.482591   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:03.482597   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:03.482654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:03.519477   68713 cri.go:89] found id: ""
	I0815 18:39:03.519503   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.519511   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:03.519517   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:03.519574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:03.556539   68713 cri.go:89] found id: ""
	I0815 18:39:03.556566   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.556577   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:03.556587   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:03.556602   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:03.610553   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:03.610593   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:03.625799   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:03.625827   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:03.697106   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:03.697132   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:03.697149   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:03.779089   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:03.779120   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:06.319280   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:06.333284   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:06.333355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:06.369696   68713 cri.go:89] found id: ""
	I0815 18:39:06.369719   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.369727   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:06.369732   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:06.369780   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:06.405023   68713 cri.go:89] found id: ""
	I0815 18:39:06.405046   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.405053   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:06.405059   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:06.405113   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:06.439948   68713 cri.go:89] found id: ""
	I0815 18:39:06.439974   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.439983   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:06.439989   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:06.440048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:06.475613   68713 cri.go:89] found id: ""
	I0815 18:39:06.475642   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.475654   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:06.475664   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:06.475723   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:06.510698   68713 cri.go:89] found id: ""
	I0815 18:39:06.510721   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.510729   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:06.510735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:06.510783   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:06.545831   68713 cri.go:89] found id: ""
	I0815 18:39:06.545861   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.545873   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:06.545880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:06.545940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:06.579027   68713 cri.go:89] found id: ""
	I0815 18:39:06.579053   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.579064   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:06.579072   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:06.579132   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:06.615308   68713 cri.go:89] found id: ""
	I0815 18:39:06.615339   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.615352   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:06.615371   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:06.615396   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:06.671523   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:06.671555   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:06.685556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:06.685580   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:06.765036   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:06.765059   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:06.765071   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:06.843412   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:06.843457   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:05.849018   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.849342   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:05.654854   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.654910   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.655240   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:06.158409   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:08.657902   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:10.658258   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.390799   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:09.404099   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:09.404160   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:09.439534   68713 cri.go:89] found id: ""
	I0815 18:39:09.439563   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.439582   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:09.439591   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:09.439654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:09.478933   68713 cri.go:89] found id: ""
	I0815 18:39:09.478963   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.478974   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:09.478982   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:09.479042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:09.514396   68713 cri.go:89] found id: ""
	I0815 18:39:09.514425   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.514436   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:09.514444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:09.514510   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:09.547749   68713 cri.go:89] found id: ""
	I0815 18:39:09.547775   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.547785   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:09.547793   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:09.547856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:09.583583   68713 cri.go:89] found id: ""
	I0815 18:39:09.583611   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.583623   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:09.583631   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:09.583695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:09.616530   68713 cri.go:89] found id: ""
	I0815 18:39:09.616560   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.616570   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:09.616576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:09.616641   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:09.655167   68713 cri.go:89] found id: ""
	I0815 18:39:09.655189   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.655199   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:09.655207   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:09.655263   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:09.691368   68713 cri.go:89] found id: ""
	I0815 18:39:09.691391   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.691398   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:09.691411   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:09.691426   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:09.740739   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:09.740770   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:09.755049   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:09.755074   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:09.825053   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:09.825080   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:09.825095   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:09.903036   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:09.903076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:12.441898   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:12.454637   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:12.454712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:12.494604   68713 cri.go:89] found id: ""
	I0815 18:39:12.494632   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.494640   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:12.494646   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:12.494699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:12.531587   68713 cri.go:89] found id: ""
	I0815 18:39:12.531631   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.531642   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:12.531649   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:12.531710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:12.564991   68713 cri.go:89] found id: ""
	I0815 18:39:12.565014   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.565021   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:12.565027   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:12.565096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:12.600667   68713 cri.go:89] found id: ""
	I0815 18:39:12.600698   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.600709   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:12.600715   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:12.600777   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:12.633658   68713 cri.go:89] found id: ""
	I0815 18:39:12.633681   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.633691   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:12.633698   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:12.633759   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:12.673709   68713 cri.go:89] found id: ""
	I0815 18:39:12.673730   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.673737   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:12.673743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:12.673790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:12.707353   68713 cri.go:89] found id: ""
	I0815 18:39:12.707378   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.707385   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:12.707390   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:12.707437   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:12.746926   68713 cri.go:89] found id: ""
	I0815 18:39:12.746949   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.746957   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:12.746965   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:12.746977   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:09.853116   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.348297   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:11.655347   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:14.154929   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:13.158257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:15.158457   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.792154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:12.792180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:12.843933   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:12.843968   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:12.859583   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:12.859609   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:12.940856   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:12.940880   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:12.940895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:15.520265   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:15.533677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:15.533754   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:15.572109   68713 cri.go:89] found id: ""
	I0815 18:39:15.572135   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.572145   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:15.572153   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:15.572221   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:15.607442   68713 cri.go:89] found id: ""
	I0815 18:39:15.607472   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.607484   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:15.607492   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:15.607551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:15.642206   68713 cri.go:89] found id: ""
	I0815 18:39:15.642230   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.642238   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:15.642246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:15.642308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:15.677914   68713 cri.go:89] found id: ""
	I0815 18:39:15.677945   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.677956   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:15.677963   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:15.678049   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:15.714466   68713 cri.go:89] found id: ""
	I0815 18:39:15.714496   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.714504   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:15.714510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:15.714563   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:15.750961   68713 cri.go:89] found id: ""
	I0815 18:39:15.750987   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.750995   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:15.751002   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:15.751050   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:15.785399   68713 cri.go:89] found id: ""
	I0815 18:39:15.785434   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.785444   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:15.785450   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:15.785498   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:15.821547   68713 cri.go:89] found id: ""
	I0815 18:39:15.821571   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.821578   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:15.821586   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:15.821597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:15.875299   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:15.875329   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:15.890376   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:15.890408   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:15.957317   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:15.957337   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:15.957352   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:16.033952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:16.033997   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:14.349171   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.849292   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.850822   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.654572   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.656041   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:17.657984   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:19.658366   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.571953   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:18.584652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:18.584721   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:18.617043   68713 cri.go:89] found id: ""
	I0815 18:39:18.617066   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.617073   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:18.617079   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:18.617127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:18.651080   68713 cri.go:89] found id: ""
	I0815 18:39:18.651112   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.651123   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:18.651130   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:18.651187   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:18.686857   68713 cri.go:89] found id: ""
	I0815 18:39:18.686890   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.686901   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:18.686909   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:18.686975   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:18.719397   68713 cri.go:89] found id: ""
	I0815 18:39:18.719434   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.719444   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:18.719452   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:18.719521   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:18.758316   68713 cri.go:89] found id: ""
	I0815 18:39:18.758349   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.758360   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:18.758366   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:18.758435   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:18.791586   68713 cri.go:89] found id: ""
	I0815 18:39:18.791609   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.791617   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:18.791623   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:18.791690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:18.827905   68713 cri.go:89] found id: ""
	I0815 18:39:18.827929   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.827937   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:18.827945   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:18.828004   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:18.869371   68713 cri.go:89] found id: ""
	I0815 18:39:18.869404   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.869412   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:18.869420   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:18.869432   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:18.920124   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:18.920158   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:18.936137   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:18.936168   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:19.006877   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:19.006902   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:19.006913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:19.088909   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:19.088953   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.632734   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:21.647246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:21.647322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:21.685574   68713 cri.go:89] found id: ""
	I0815 18:39:21.685606   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.685614   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:21.685620   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:21.685676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:21.717073   68713 cri.go:89] found id: ""
	I0815 18:39:21.717112   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.717124   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:21.717133   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:21.717205   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:21.752072   68713 cri.go:89] found id: ""
	I0815 18:39:21.752101   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.752112   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:21.752120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:21.752182   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:21.786811   68713 cri.go:89] found id: ""
	I0815 18:39:21.786834   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.786842   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:21.786848   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:21.786893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:21.823694   68713 cri.go:89] found id: ""
	I0815 18:39:21.823719   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.823728   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:21.823733   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:21.823790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:21.859358   68713 cri.go:89] found id: ""
	I0815 18:39:21.859387   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.859398   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:21.859405   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:21.859469   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:21.893723   68713 cri.go:89] found id: ""
	I0815 18:39:21.893751   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.893761   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:21.893769   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:21.893826   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:21.929338   68713 cri.go:89] found id: ""
	I0815 18:39:21.929368   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.929379   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:21.929388   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:21.929414   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:21.979107   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:21.979141   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:21.993968   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:21.994005   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:22.063359   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:22.063384   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:22.063398   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:22.144303   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:22.144337   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.348202   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.349199   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.154244   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.155954   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.658572   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.658782   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.658946   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:24.688104   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:24.701230   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:24.701298   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:24.735056   68713 cri.go:89] found id: ""
	I0815 18:39:24.735086   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.735097   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:24.735104   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:24.735172   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:24.769704   68713 cri.go:89] found id: ""
	I0815 18:39:24.769732   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.769743   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:24.769751   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:24.769812   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:24.808763   68713 cri.go:89] found id: ""
	I0815 18:39:24.808788   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.808796   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:24.808807   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:24.808856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:24.846997   68713 cri.go:89] found id: ""
	I0815 18:39:24.847028   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.847038   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:24.847045   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:24.847106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:24.881681   68713 cri.go:89] found id: ""
	I0815 18:39:24.881705   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.881713   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:24.881719   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:24.881772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:24.917000   68713 cri.go:89] found id: ""
	I0815 18:39:24.917024   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.917032   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:24.917040   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:24.917088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:24.951133   68713 cri.go:89] found id: ""
	I0815 18:39:24.951156   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.951164   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:24.951170   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:24.951218   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:24.987306   68713 cri.go:89] found id: ""
	I0815 18:39:24.987331   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.987339   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:24.987347   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:24.987360   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:25.039533   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:25.039566   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:25.053011   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:25.053036   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:25.125864   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:25.125884   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:25.125895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:25.209885   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:25.209916   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:27.751681   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:27.765316   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:27.765390   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:25.848840   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.849344   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.156068   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.654722   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:28.158317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.158632   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.805820   68713 cri.go:89] found id: ""
	I0815 18:39:27.805858   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.805870   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:27.805878   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:27.805940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:27.846684   68713 cri.go:89] found id: ""
	I0815 18:39:27.846717   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.846727   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:27.846737   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:27.846801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:27.882326   68713 cri.go:89] found id: ""
	I0815 18:39:27.882358   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.882370   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:27.882378   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:27.882448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:27.917340   68713 cri.go:89] found id: ""
	I0815 18:39:27.917416   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.917431   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:27.917442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:27.917505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:27.952674   68713 cri.go:89] found id: ""
	I0815 18:39:27.952700   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.952708   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:27.952714   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:27.952763   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:27.986103   68713 cri.go:89] found id: ""
	I0815 18:39:27.986132   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.986143   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:27.986151   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:27.986212   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:28.023674   68713 cri.go:89] found id: ""
	I0815 18:39:28.023716   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.023735   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:28.023742   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:28.023807   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:28.064902   68713 cri.go:89] found id: ""
	I0815 18:39:28.064929   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.064937   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:28.064945   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:28.064957   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:28.116145   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:28.116180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:28.130435   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:28.130462   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:28.204899   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:28.204920   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:28.204931   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:28.284165   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:28.284202   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:30.824135   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:30.837515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:30.837583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:30.874671   68713 cri.go:89] found id: ""
	I0815 18:39:30.874695   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.874705   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:30.874712   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:30.874776   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:30.909990   68713 cri.go:89] found id: ""
	I0815 18:39:30.910027   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.910038   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:30.910045   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:30.910106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:30.946824   68713 cri.go:89] found id: ""
	I0815 18:39:30.946851   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.946859   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:30.946865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:30.946912   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:30.983392   68713 cri.go:89] found id: ""
	I0815 18:39:30.983419   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.983429   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:30.983437   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:30.983505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:31.023471   68713 cri.go:89] found id: ""
	I0815 18:39:31.023500   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.023510   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:31.023518   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:31.023583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:31.063586   68713 cri.go:89] found id: ""
	I0815 18:39:31.063616   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.063627   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:31.063636   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:31.063696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:31.103147   68713 cri.go:89] found id: ""
	I0815 18:39:31.103173   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.103180   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:31.103186   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:31.103237   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:31.144082   68713 cri.go:89] found id: ""
	I0815 18:39:31.144113   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.144124   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:31.144136   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:31.144150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:31.212535   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:31.212563   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:31.212586   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:31.292039   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:31.292076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:31.335023   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:31.335050   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:31.388817   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:31.388853   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:30.349110   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.349209   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.154683   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.653806   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.654716   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.658245   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.659119   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
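[Editor's note] The interleaved pod_ready lines belong to the other StartStop profiles (processes 67936, 68248 and 68429), each polling its metrics-server pod until it reports Ready. An illustrative manual equivalent of that check, run against the matching cluster (this is not the code minikube executes, just a sketch of what the poll observes):

    kubectl -n kube-system get pod metrics-server-6867b74b74-8mppk \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready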
	I0815 18:39:33.925861   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:33.939604   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:33.939668   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:33.974538   68713 cri.go:89] found id: ""
	I0815 18:39:33.974563   68713 logs.go:276] 0 containers: []
	W0815 18:39:33.974571   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:33.974577   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:33.974634   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:34.009017   68713 cri.go:89] found id: ""
	I0815 18:39:34.009048   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.009058   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:34.009064   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:34.009120   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:34.049478   68713 cri.go:89] found id: ""
	I0815 18:39:34.049501   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.049517   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:34.049523   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:34.049576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:34.091011   68713 cri.go:89] found id: ""
	I0815 18:39:34.091040   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.091050   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:34.091056   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:34.091114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:34.126617   68713 cri.go:89] found id: ""
	I0815 18:39:34.126640   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.126650   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:34.126657   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:34.126706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:34.168140   68713 cri.go:89] found id: ""
	I0815 18:39:34.168169   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.168179   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:34.168187   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:34.168279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:34.205052   68713 cri.go:89] found id: ""
	I0815 18:39:34.205081   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.205091   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:34.205098   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:34.205173   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:34.238474   68713 cri.go:89] found id: ""
	I0815 18:39:34.238499   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.238506   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:34.238521   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:34.238540   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.280574   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:34.280601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:34.332662   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:34.332704   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:34.348556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:34.348591   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:34.421428   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:34.421450   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:34.421464   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.004855   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:37.019306   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:37.019378   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:37.057588   68713 cri.go:89] found id: ""
	I0815 18:39:37.057618   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.057626   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:37.057641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:37.057706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:37.095645   68713 cri.go:89] found id: ""
	I0815 18:39:37.095678   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.095687   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:37.095693   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:37.095750   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:37.131669   68713 cri.go:89] found id: ""
	I0815 18:39:37.131696   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.131711   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:37.131717   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:37.131772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:37.185065   68713 cri.go:89] found id: ""
	I0815 18:39:37.185097   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.185108   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:37.185115   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:37.185180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:37.220220   68713 cri.go:89] found id: ""
	I0815 18:39:37.220251   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.220262   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:37.220269   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:37.220322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:37.259816   68713 cri.go:89] found id: ""
	I0815 18:39:37.259849   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.259859   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:37.259868   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:37.259920   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:37.292777   68713 cri.go:89] found id: ""
	I0815 18:39:37.292807   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.292818   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:37.292825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:37.292888   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:37.328673   68713 cri.go:89] found id: ""
	I0815 18:39:37.328703   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.328714   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:37.328725   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:37.328740   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:37.379131   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:37.379172   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:37.392982   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:37.393017   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:37.470727   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:37.470750   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:37.470766   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.552353   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:37.552386   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.349765   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:36.655101   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.154433   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.158746   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.658907   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:40.094008   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:40.107681   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:40.107753   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:40.142229   68713 cri.go:89] found id: ""
	I0815 18:39:40.142254   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.142264   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:40.142271   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:40.142333   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:40.180622   68713 cri.go:89] found id: ""
	I0815 18:39:40.180650   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.180665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:40.180672   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:40.180733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:40.219085   68713 cri.go:89] found id: ""
	I0815 18:39:40.219113   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.219120   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:40.219126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:40.219174   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:40.254807   68713 cri.go:89] found id: ""
	I0815 18:39:40.254838   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.254850   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:40.254858   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:40.254940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:40.290438   68713 cri.go:89] found id: ""
	I0815 18:39:40.290466   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.290478   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:40.290484   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:40.290547   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:40.326320   68713 cri.go:89] found id: ""
	I0815 18:39:40.326356   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.326364   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:40.326370   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:40.326429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:40.361538   68713 cri.go:89] found id: ""
	I0815 18:39:40.361563   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.361570   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:40.361576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:40.361629   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:40.397275   68713 cri.go:89] found id: ""
	I0815 18:39:40.397304   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.397316   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:40.397326   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:40.397342   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:40.466042   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:40.466064   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:40.466078   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:40.544915   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:40.544951   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:40.584992   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:40.585019   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:40.634792   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:40.634837   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:39.848609   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.849831   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.655153   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.655372   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:42.159650   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:44.658547   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.149819   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:43.164578   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:43.164649   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:43.199576   68713 cri.go:89] found id: ""
	I0815 18:39:43.199621   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.199633   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:43.199641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:43.199702   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:43.233783   68713 cri.go:89] found id: ""
	I0815 18:39:43.233820   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.233833   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:43.233840   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:43.233911   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:43.269406   68713 cri.go:89] found id: ""
	I0815 18:39:43.269437   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.269449   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:43.269457   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:43.269570   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:43.310686   68713 cri.go:89] found id: ""
	I0815 18:39:43.310715   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.310726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:43.310734   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:43.310795   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:43.348662   68713 cri.go:89] found id: ""
	I0815 18:39:43.348689   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.348699   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:43.348706   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:43.348767   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:43.385676   68713 cri.go:89] found id: ""
	I0815 18:39:43.385714   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.385726   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:43.385737   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:43.385802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:43.422605   68713 cri.go:89] found id: ""
	I0815 18:39:43.422634   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.422645   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:43.422653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:43.422712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:43.463208   68713 cri.go:89] found id: ""
	I0815 18:39:43.463238   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.463249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:43.463260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:43.463278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:43.476637   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:43.476664   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:43.552239   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:43.552263   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:43.552278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:43.653055   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:43.653108   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:43.699166   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:43.699192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.251725   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:46.265164   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:46.265240   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:46.305095   68713 cri.go:89] found id: ""
	I0815 18:39:46.305123   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.305133   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:46.305140   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:46.305196   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:46.349744   68713 cri.go:89] found id: ""
	I0815 18:39:46.349773   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.349783   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:46.349790   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:46.349858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:46.385807   68713 cri.go:89] found id: ""
	I0815 18:39:46.385839   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.385847   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:46.385853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:46.385908   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:46.419977   68713 cri.go:89] found id: ""
	I0815 18:39:46.420011   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.420024   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:46.420031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:46.420093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:46.454852   68713 cri.go:89] found id: ""
	I0815 18:39:46.454883   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.454894   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:46.454901   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:46.454962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:46.497157   68713 cri.go:89] found id: ""
	I0815 18:39:46.497192   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.497202   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:46.497210   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:46.497271   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:46.530191   68713 cri.go:89] found id: ""
	I0815 18:39:46.530218   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.530226   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:46.530232   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:46.530282   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:46.566024   68713 cri.go:89] found id: ""
	I0815 18:39:46.566050   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.566063   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:46.566074   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:46.566103   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.621969   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:46.622004   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:46.636576   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:46.636603   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:46.706819   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:46.706842   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:46.706857   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:46.786589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:46.786634   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:44.352685   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.849090   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.849424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:45.655900   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.154862   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.658710   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.157317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.324853   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:49.343543   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:49.343618   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:49.396260   68713 cri.go:89] found id: ""
	I0815 18:39:49.396292   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.396303   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:49.396311   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:49.396380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:49.437579   68713 cri.go:89] found id: ""
	I0815 18:39:49.437604   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.437612   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:49.437617   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:49.437663   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:49.476206   68713 cri.go:89] found id: ""
	I0815 18:39:49.476232   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.476239   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:49.476245   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:49.476296   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:49.511324   68713 cri.go:89] found id: ""
	I0815 18:39:49.511349   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.511357   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:49.511363   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:49.511428   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:49.545875   68713 cri.go:89] found id: ""
	I0815 18:39:49.545907   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.545916   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:49.545922   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:49.545981   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:49.582176   68713 cri.go:89] found id: ""
	I0815 18:39:49.582204   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.582228   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:49.582246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:49.582309   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:49.623288   68713 cri.go:89] found id: ""
	I0815 18:39:49.623318   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.623327   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:49.623333   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:49.623394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:49.662352   68713 cri.go:89] found id: ""
	I0815 18:39:49.662377   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.662389   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:49.662399   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:49.662424   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:49.745582   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:49.745617   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:49.785256   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:49.785295   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:49.835944   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:49.835979   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:49.852859   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:49.852886   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:49.928427   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
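[Editor's note] Every "describe nodes" attempt above fails identically because nothing is serving the API on localhost:8443 yet. A hypothetical spot check on the node (these commands are assumptions for illustration and do not appear in the log) would confirm that directly:

    sudo ss -tlnp | grep ':8443' || echo 'nothing listening on 8443'
    curl -ksS https://localhost:8443/healthz || true    # refused until kube-apiserver is back up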
	I0815 18:39:52.429223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:52.442384   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:52.442460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:52.480515   68713 cri.go:89] found id: ""
	I0815 18:39:52.480543   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.480553   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:52.480558   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:52.480605   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:52.518346   68713 cri.go:89] found id: ""
	I0815 18:39:52.518382   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.518393   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:52.518401   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:52.518460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:52.557696   68713 cri.go:89] found id: ""
	I0815 18:39:52.557722   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.557731   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:52.557736   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:52.557786   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:52.590849   68713 cri.go:89] found id: ""
	I0815 18:39:52.590879   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.590890   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:52.590898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:52.590961   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:52.629950   68713 cri.go:89] found id: ""
	I0815 18:39:52.629980   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.629992   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:52.629999   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:52.630047   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:52.666039   68713 cri.go:89] found id: ""
	I0815 18:39:52.666070   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.666081   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:52.666089   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:52.666146   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:52.699917   68713 cri.go:89] found id: ""
	I0815 18:39:52.699941   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.699949   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:52.699955   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:52.700001   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:52.735944   68713 cri.go:89] found id: ""
	I0815 18:39:52.735973   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.735981   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:52.735989   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:52.736001   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:39:50.849633   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.850298   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:50.155118   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.155166   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:54.653844   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:51.159401   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:53.658513   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:39:52.805519   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.805537   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:52.805559   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:52.894175   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:52.894213   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:52.932974   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:52.933006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:52.984206   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:52.984244   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.498477   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:55.511319   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:55.511380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:55.544899   68713 cri.go:89] found id: ""
	I0815 18:39:55.544928   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.544936   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:55.544943   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:55.545003   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:55.578821   68713 cri.go:89] found id: ""
	I0815 18:39:55.578855   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.578864   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:55.578869   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:55.578922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:55.615392   68713 cri.go:89] found id: ""
	I0815 18:39:55.615422   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.615434   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:55.615441   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:55.615501   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:55.653456   68713 cri.go:89] found id: ""
	I0815 18:39:55.653482   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.653493   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:55.653500   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:55.653558   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:55.687716   68713 cri.go:89] found id: ""
	I0815 18:39:55.687741   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.687749   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:55.687755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:55.687802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:55.725518   68713 cri.go:89] found id: ""
	I0815 18:39:55.725543   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.725553   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:55.725561   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:55.725631   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:55.758451   68713 cri.go:89] found id: ""
	I0815 18:39:55.758479   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.758490   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:55.758498   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:55.758560   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:55.792653   68713 cri.go:89] found id: ""
	I0815 18:39:55.792680   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.792687   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:55.792699   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:55.792710   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:55.832127   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:55.832156   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:55.885255   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:55.885289   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.898980   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:55.899009   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:55.967579   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:55.967609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:55.967624   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:55.348998   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:57.349656   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.654840   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.655471   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.158348   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.658194   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.658852   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.543524   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:58.556338   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:58.556412   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:58.593359   68713 cri.go:89] found id: ""
	I0815 18:39:58.593390   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.593401   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:58.593409   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:58.593472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:58.628446   68713 cri.go:89] found id: ""
	I0815 18:39:58.628471   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.628481   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:58.628504   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:58.628567   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:58.663930   68713 cri.go:89] found id: ""
	I0815 18:39:58.663954   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.663964   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:58.663971   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:58.664028   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:58.701070   68713 cri.go:89] found id: ""
	I0815 18:39:58.701095   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.701103   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:58.701108   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:58.701156   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:58.734427   68713 cri.go:89] found id: ""
	I0815 18:39:58.734457   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.734468   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:58.734476   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:58.734543   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:58.769121   68713 cri.go:89] found id: ""
	I0815 18:39:58.769144   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.769152   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:58.769162   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:58.769215   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:58.805771   68713 cri.go:89] found id: ""
	I0815 18:39:58.805796   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.805803   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:58.805808   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:58.805856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:58.840288   68713 cri.go:89] found id: ""
	I0815 18:39:58.840315   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.840325   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:58.840336   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:58.840351   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:58.895856   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:58.895893   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:58.909453   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:58.909478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:58.975939   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:58.975960   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:58.975971   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.055318   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:59.055353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.595588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:01.608625   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:01.608690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:01.646105   68713 cri.go:89] found id: ""
	I0815 18:40:01.646133   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.646144   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:01.646151   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:01.646214   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:01.685162   68713 cri.go:89] found id: ""
	I0815 18:40:01.685192   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.685202   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:01.685210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:01.685261   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:01.721452   68713 cri.go:89] found id: ""
	I0815 18:40:01.721479   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.721499   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:01.721507   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:01.721576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:01.762288   68713 cri.go:89] found id: ""
	I0815 18:40:01.762318   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.762331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:01.762339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:01.762429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:01.800547   68713 cri.go:89] found id: ""
	I0815 18:40:01.800579   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.800590   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:01.800598   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:01.800660   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:01.839182   68713 cri.go:89] found id: ""
	I0815 18:40:01.839214   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.839223   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:01.839229   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:01.839294   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:01.875364   68713 cri.go:89] found id: ""
	I0815 18:40:01.875390   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.875398   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:01.875404   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:01.875452   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:01.910485   68713 cri.go:89] found id: ""
	I0815 18:40:01.910512   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.910521   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:01.910535   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:01.910547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.951970   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:01.951998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:02.005720   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:02.005764   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:02.020941   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:02.020969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:02.101206   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:02.101224   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:02.101236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.850909   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:02.349180   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.659366   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.153614   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.158375   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.159868   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:04.687482   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:04.701501   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:04.701562   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.739613   68713 cri.go:89] found id: ""
	I0815 18:40:04.739636   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.739644   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:04.739650   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:04.739704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:04.774419   68713 cri.go:89] found id: ""
	I0815 18:40:04.774443   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.774453   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:04.774460   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:04.774522   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:04.809516   68713 cri.go:89] found id: ""
	I0815 18:40:04.809538   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.809547   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:04.809552   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:04.809612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:04.843822   68713 cri.go:89] found id: ""
	I0815 18:40:04.843850   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.843870   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:04.843878   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:04.843942   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:04.883853   68713 cri.go:89] found id: ""
	I0815 18:40:04.883881   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.883892   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:04.883900   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:04.883962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:04.918811   68713 cri.go:89] found id: ""
	I0815 18:40:04.918838   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.918846   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:04.918852   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:04.918903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:04.953076   68713 cri.go:89] found id: ""
	I0815 18:40:04.953101   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.953110   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:04.953116   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:04.953163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:04.988219   68713 cri.go:89] found id: ""
	I0815 18:40:04.988246   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.988255   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:04.988264   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:04.988275   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:05.060859   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:05.060896   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:05.060913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:05.146768   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:05.146817   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:05.187816   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:05.187845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:05.239027   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:05.239067   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:07.754503   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:07.769608   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:07.769695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:06.850409   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.155042   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.654547   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:09.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.658972   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:10.159255   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.804435   68713 cri.go:89] found id: ""
	I0815 18:40:07.804460   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.804468   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:07.804474   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:07.804551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:07.839760   68713 cri.go:89] found id: ""
	I0815 18:40:07.839787   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.839797   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:07.839804   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:07.839868   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:07.877984   68713 cri.go:89] found id: ""
	I0815 18:40:07.878009   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.878017   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:07.878022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:07.878070   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:07.914294   68713 cri.go:89] found id: ""
	I0815 18:40:07.914319   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.914328   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:07.914336   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:07.914395   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:07.948751   68713 cri.go:89] found id: ""
	I0815 18:40:07.948777   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.948787   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:07.948795   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:07.948861   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:07.982262   68713 cri.go:89] found id: ""
	I0815 18:40:07.982288   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.982296   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:07.982302   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:07.982358   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:08.015560   68713 cri.go:89] found id: ""
	I0815 18:40:08.015588   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.015596   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:08.015602   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:08.015662   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:08.049854   68713 cri.go:89] found id: ""
	I0815 18:40:08.049878   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.049885   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:08.049893   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:08.049905   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:08.102269   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:08.102303   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:08.117181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:08.117209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:08.188586   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:08.188609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:08.188623   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:08.272204   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:08.272239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:10.813223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:10.826181   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:10.826257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:10.863728   68713 cri.go:89] found id: ""
	I0815 18:40:10.863753   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.863761   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:10.863766   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:10.863813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:10.898074   68713 cri.go:89] found id: ""
	I0815 18:40:10.898102   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.898113   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:10.898121   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:10.898183   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:10.933948   68713 cri.go:89] found id: ""
	I0815 18:40:10.933980   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.933991   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:10.933998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:10.934059   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:10.972402   68713 cri.go:89] found id: ""
	I0815 18:40:10.972428   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.972436   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:10.972442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:10.972509   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:11.006814   68713 cri.go:89] found id: ""
	I0815 18:40:11.006843   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.006851   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:11.006857   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:11.006909   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:11.042739   68713 cri.go:89] found id: ""
	I0815 18:40:11.042763   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.042771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:11.042777   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:11.042835   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:11.079132   68713 cri.go:89] found id: ""
	I0815 18:40:11.079164   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.079173   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:11.079179   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:11.079228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:11.113271   68713 cri.go:89] found id: ""
	I0815 18:40:11.113298   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.113309   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:11.113317   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:11.113328   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:11.166669   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:11.166698   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:11.180789   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:11.180815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:11.247954   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:11.247985   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:11.247999   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:11.331952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:11.331995   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:09.349194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.349627   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.850439   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.655088   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.656674   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:12.658287   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:15.158361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.874466   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:13.888346   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:13.888416   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:13.922542   68713 cri.go:89] found id: ""
	I0815 18:40:13.922569   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.922579   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:13.922586   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:13.922654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:13.958039   68713 cri.go:89] found id: ""
	I0815 18:40:13.958066   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.958076   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:13.958082   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:13.958131   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:13.994095   68713 cri.go:89] found id: ""
	I0815 18:40:13.994125   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.994136   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:13.994144   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:13.994195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:14.027918   68713 cri.go:89] found id: ""
	I0815 18:40:14.027949   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.027960   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:14.027969   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:14.028027   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:14.063849   68713 cri.go:89] found id: ""
	I0815 18:40:14.063879   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.063889   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:14.063897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:14.063957   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:14.098444   68713 cri.go:89] found id: ""
	I0815 18:40:14.098473   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.098483   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:14.098490   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:14.098553   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:14.136834   68713 cri.go:89] found id: ""
	I0815 18:40:14.136861   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.136874   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:14.136880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:14.136925   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:14.172377   68713 cri.go:89] found id: ""
	I0815 18:40:14.172400   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.172408   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:14.172415   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:14.172430   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:14.212212   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:14.212242   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:14.268412   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:14.268450   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:14.282978   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:14.283006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:14.352777   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:14.352796   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:14.352822   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:16.939906   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:16.953118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:16.953178   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:16.991697   68713 cri.go:89] found id: ""
	I0815 18:40:16.991723   68713 logs.go:276] 0 containers: []
	W0815 18:40:16.991731   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:16.991736   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:16.991801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:17.027572   68713 cri.go:89] found id: ""
	I0815 18:40:17.027602   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.027613   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:17.027623   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:17.027682   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:17.060718   68713 cri.go:89] found id: ""
	I0815 18:40:17.060750   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.060763   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:17.060771   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:17.060829   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:17.096746   68713 cri.go:89] found id: ""
	I0815 18:40:17.096771   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.096780   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:17.096786   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:17.096846   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:17.130755   68713 cri.go:89] found id: ""
	I0815 18:40:17.130791   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.130802   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:17.130810   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:17.130872   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:17.167991   68713 cri.go:89] found id: ""
	I0815 18:40:17.168016   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.168026   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:17.168034   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:17.168093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:17.200695   68713 cri.go:89] found id: ""
	I0815 18:40:17.200722   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.200733   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:17.200741   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:17.200799   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:17.237788   68713 cri.go:89] found id: ""
	I0815 18:40:17.237816   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.237824   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:17.237833   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:17.237848   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:17.288888   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:17.288921   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:17.302862   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:17.302903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:17.370062   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:17.370085   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:17.370100   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:17.444742   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:17.444781   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:16.349749   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.849197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:16.155555   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.654875   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:17.160009   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.657774   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.984813   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:19.998010   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:19.998077   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:20.032880   68713 cri.go:89] found id: ""
	I0815 18:40:20.032903   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.032912   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:20.032918   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:20.032973   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:20.069191   68713 cri.go:89] found id: ""
	I0815 18:40:20.069224   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.069236   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:20.069243   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:20.069301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:20.101930   68713 cri.go:89] found id: ""
	I0815 18:40:20.101954   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.101962   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:20.101968   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:20.102016   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:20.136981   68713 cri.go:89] found id: ""
	I0815 18:40:20.137006   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.137014   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:20.137020   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:20.137066   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:20.174517   68713 cri.go:89] found id: ""
	I0815 18:40:20.174543   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.174550   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:20.174556   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:20.174611   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:20.208525   68713 cri.go:89] found id: ""
	I0815 18:40:20.208549   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.208559   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:20.208567   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:20.208626   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:20.240824   68713 cri.go:89] found id: ""
	I0815 18:40:20.240855   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.240867   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:20.240874   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:20.240946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:20.277683   68713 cri.go:89] found id: ""
	I0815 18:40:20.277710   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.277720   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:20.277728   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:20.277739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:20.324271   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:20.324304   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:20.376250   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:20.376285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:20.392777   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:20.392813   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:20.464122   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:20.464156   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:20.464180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:20.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:22.849591   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:20.654982   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.154537   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:21.658354   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.658505   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.041684   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:23.055779   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:23.055858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:23.095391   68713 cri.go:89] found id: ""
	I0815 18:40:23.095414   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.095426   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:23.095432   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:23.095483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:23.134907   68713 cri.go:89] found id: ""
	I0815 18:40:23.134936   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.134943   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:23.134949   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:23.134994   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:23.171806   68713 cri.go:89] found id: ""
	I0815 18:40:23.171845   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.171854   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:23.171861   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:23.171924   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:23.205378   68713 cri.go:89] found id: ""
	I0815 18:40:23.205404   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.205412   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:23.205417   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:23.205467   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:23.239503   68713 cri.go:89] found id: ""
	I0815 18:40:23.239531   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.239540   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:23.239547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:23.239614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:23.275802   68713 cri.go:89] found id: ""
	I0815 18:40:23.275828   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.275842   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:23.275849   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:23.275894   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:23.310127   68713 cri.go:89] found id: ""
	I0815 18:40:23.310154   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.310167   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:23.310173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:23.310219   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:23.344646   68713 cri.go:89] found id: ""
	I0815 18:40:23.344674   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.344685   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:23.344696   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:23.344711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:23.397260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:23.397310   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:23.425518   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:23.425553   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:23.495528   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:23.495547   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:23.495562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:23.574489   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:23.574524   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.119044   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:26.133806   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:26.133880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:26.175683   68713 cri.go:89] found id: ""
	I0815 18:40:26.175711   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.175722   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:26.175730   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:26.175789   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:26.210634   68713 cri.go:89] found id: ""
	I0815 18:40:26.210658   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.210665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:26.210671   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:26.210724   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:26.244146   68713 cri.go:89] found id: ""
	I0815 18:40:26.244176   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.244187   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:26.244195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:26.244274   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:26.277312   68713 cri.go:89] found id: ""
	I0815 18:40:26.277335   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.277343   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:26.277349   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:26.277410   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:26.311538   68713 cri.go:89] found id: ""
	I0815 18:40:26.311562   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.311570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:26.311576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:26.311623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:26.347816   68713 cri.go:89] found id: ""
	I0815 18:40:26.347840   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.347847   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:26.347853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:26.347906   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:26.381211   68713 cri.go:89] found id: ""
	I0815 18:40:26.381234   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.381242   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:26.381248   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:26.381303   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:26.413982   68713 cri.go:89] found id: ""
	I0815 18:40:26.414010   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.414018   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:26.414027   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:26.414038   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:26.500686   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:26.500721   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.537615   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:26.537642   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:26.590119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:26.590150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:26.603713   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:26.603739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:26.675455   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:25.349400   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.853388   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:25.155463   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.155580   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.156973   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:26.158898   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:28.658576   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.176084   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:29.189743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:29.189813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:29.225500   68713 cri.go:89] found id: ""
	I0815 18:40:29.225536   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.225548   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:29.225557   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:29.225614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:29.261839   68713 cri.go:89] found id: ""
	I0815 18:40:29.261866   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.261877   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:29.261884   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:29.261946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:29.296685   68713 cri.go:89] found id: ""
	I0815 18:40:29.296708   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.296716   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:29.296728   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:29.296787   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:29.332524   68713 cri.go:89] found id: ""
	I0815 18:40:29.332550   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.332558   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:29.332564   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:29.332615   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:29.368918   68713 cri.go:89] found id: ""
	I0815 18:40:29.368943   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.368953   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:29.368961   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:29.369020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:29.403175   68713 cri.go:89] found id: ""
	I0815 18:40:29.403200   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.403211   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:29.403218   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:29.403279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:29.438957   68713 cri.go:89] found id: ""
	I0815 18:40:29.438981   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.438989   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:29.438994   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:29.439051   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:29.472153   68713 cri.go:89] found id: ""
	I0815 18:40:29.472184   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.472195   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:29.472206   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:29.472221   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:29.560484   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:29.560547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:29.600366   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:29.600402   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:29.656536   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:29.656569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:29.669899   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:29.669925   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:29.738515   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:32.239207   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:32.253976   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:32.254048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:32.290918   68713 cri.go:89] found id: ""
	I0815 18:40:32.290942   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.290951   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:32.290957   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:32.291009   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:32.325567   68713 cri.go:89] found id: ""
	I0815 18:40:32.325596   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.325606   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:32.325613   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:32.325674   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:32.360959   68713 cri.go:89] found id: ""
	I0815 18:40:32.360994   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.361005   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:32.361015   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:32.361090   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:32.398583   68713 cri.go:89] found id: ""
	I0815 18:40:32.398614   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.398625   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:32.398633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:32.398696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:32.432980   68713 cri.go:89] found id: ""
	I0815 18:40:32.433007   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.433017   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:32.433024   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:32.433088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:32.467645   68713 cri.go:89] found id: ""
	I0815 18:40:32.467678   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.467688   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:32.467697   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:32.467757   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:32.504233   68713 cri.go:89] found id: ""
	I0815 18:40:32.504265   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.504275   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:32.504282   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:32.504347   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:32.539127   68713 cri.go:89] found id: ""
	I0815 18:40:32.539160   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.539175   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:32.539186   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:32.539200   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:32.620782   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:32.620818   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:32.660920   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:32.660950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:32.714392   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:32.714425   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:32.727629   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:32.727655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:40:30.349267   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:32.349896   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:34.154871   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.157219   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:33.158733   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:35.158871   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:40:32.801258   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.301393   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:35.315460   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:35.315515   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:35.352266   68713 cri.go:89] found id: ""
	I0815 18:40:35.352287   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.352295   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:35.352301   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:35.352345   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:35.387274   68713 cri.go:89] found id: ""
	I0815 18:40:35.387305   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.387316   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:35.387324   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:35.387386   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:35.422376   68713 cri.go:89] found id: ""
	I0815 18:40:35.422403   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.422413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:35.422419   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:35.422464   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:35.456423   68713 cri.go:89] found id: ""
	I0815 18:40:35.456452   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.456459   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:35.456465   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:35.456544   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:35.494878   68713 cri.go:89] found id: ""
	I0815 18:40:35.494903   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.494912   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:35.494919   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:35.494980   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:35.528027   68713 cri.go:89] found id: ""
	I0815 18:40:35.528051   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.528062   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:35.528069   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:35.528128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:35.568543   68713 cri.go:89] found id: ""
	I0815 18:40:35.568570   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.568580   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:35.568587   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:35.568654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:35.627717   68713 cri.go:89] found id: ""
	I0815 18:40:35.627747   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.627766   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:35.627777   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:35.627792   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:35.691497   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:35.691530   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:35.705062   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:35.705092   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:35.783785   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.783806   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:35.783819   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:35.867282   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:35.867317   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:34.848226   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.849242   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.850686   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.154981   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.155165   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:37.659017   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.158408   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.407940   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:38.421571   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:38.421648   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:38.456551   68713 cri.go:89] found id: ""
	I0815 18:40:38.456586   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.456597   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:38.456604   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:38.456665   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:38.494133   68713 cri.go:89] found id: ""
	I0815 18:40:38.494167   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.494179   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:38.494186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:38.494253   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:38.531566   68713 cri.go:89] found id: ""
	I0815 18:40:38.531599   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.531610   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:38.531617   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:38.531678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:38.567613   68713 cri.go:89] found id: ""
	I0815 18:40:38.567640   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.567652   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:38.567659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:38.567717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:38.603172   68713 cri.go:89] found id: ""
	I0815 18:40:38.603201   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.603212   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:38.603225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:38.603284   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:38.639600   68713 cri.go:89] found id: ""
	I0815 18:40:38.639629   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.639640   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:38.639648   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:38.639710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:38.675780   68713 cri.go:89] found id: ""
	I0815 18:40:38.675811   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.675821   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:38.675828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:38.675885   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:38.708745   68713 cri.go:89] found id: ""
	I0815 18:40:38.708775   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.708786   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:38.708796   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:38.708815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:38.722485   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:38.722514   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:38.793913   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:38.793936   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:38.793950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:38.880706   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:38.880744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:38.919505   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:38.919533   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.472452   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:41.486204   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:41.486264   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:41.520251   68713 cri.go:89] found id: ""
	I0815 18:40:41.520282   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.520294   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:41.520302   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:41.520362   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:41.561294   68713 cri.go:89] found id: ""
	I0815 18:40:41.561325   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.561336   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:41.561343   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:41.561403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:41.595290   68713 cri.go:89] found id: ""
	I0815 18:40:41.595318   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.595326   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:41.595331   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:41.595381   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:41.629706   68713 cri.go:89] found id: ""
	I0815 18:40:41.629736   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.629744   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:41.629750   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:41.629816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:41.671862   68713 cri.go:89] found id: ""
	I0815 18:40:41.671885   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.671893   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:41.671898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:41.671951   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:41.710298   68713 cri.go:89] found id: ""
	I0815 18:40:41.710349   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.710360   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:41.710368   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:41.710425   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:41.745434   68713 cri.go:89] found id: ""
	I0815 18:40:41.745472   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.745487   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:41.745492   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:41.745548   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:41.781038   68713 cri.go:89] found id: ""
	I0815 18:40:41.781073   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.781081   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:41.781088   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:41.781099   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:41.863977   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:41.864023   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:41.907477   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:41.907505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.962921   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:41.962956   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:41.976458   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:41.976505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:42.044372   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:41.349260   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.349615   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.656633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.154626   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:42.658519   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.659640   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.544803   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:44.559538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:44.559595   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:44.595471   68713 cri.go:89] found id: ""
	I0815 18:40:44.595501   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.595511   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:44.595518   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:44.595581   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:44.630148   68713 cri.go:89] found id: ""
	I0815 18:40:44.630173   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.630181   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:44.630189   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:44.630245   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:44.666084   68713 cri.go:89] found id: ""
	I0815 18:40:44.666110   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.666119   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:44.666126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:44.666180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:44.700286   68713 cri.go:89] found id: ""
	I0815 18:40:44.700320   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.700331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:44.700339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:44.700394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:44.734115   68713 cri.go:89] found id: ""
	I0815 18:40:44.734143   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.734151   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:44.734157   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:44.734216   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:44.770306   68713 cri.go:89] found id: ""
	I0815 18:40:44.770363   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.770376   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:44.770383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:44.770453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:44.806766   68713 cri.go:89] found id: ""
	I0815 18:40:44.806790   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.806798   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:44.806803   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:44.806865   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:44.843574   68713 cri.go:89] found id: ""
	I0815 18:40:44.843603   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.843613   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:44.843623   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:44.843638   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:44.896119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:44.896148   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:44.909537   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:44.909562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:44.980268   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:44.980290   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:44.980307   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:45.066589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:45.066626   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:47.605934   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:47.620644   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:47.620709   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:47.660939   68713 cri.go:89] found id: ""
	I0815 18:40:47.660960   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.660967   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:47.660973   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:47.661021   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:47.701018   68713 cri.go:89] found id: ""
	I0815 18:40:47.701047   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.701059   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:47.701107   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:47.701177   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:47.739487   68713 cri.go:89] found id: ""
	I0815 18:40:47.739514   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.739523   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:47.739528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:47.739584   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:47.781483   68713 cri.go:89] found id: ""
	I0815 18:40:47.781508   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.781515   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:47.781520   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:47.781571   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:45.850565   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.851368   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:45.156177   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.654437   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.157895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:49.658101   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.816781   68713 cri.go:89] found id: ""
	I0815 18:40:47.816806   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.816813   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:47.816819   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:47.816875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:47.853951   68713 cri.go:89] found id: ""
	I0815 18:40:47.853976   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.853984   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:47.853990   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:47.854062   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:47.892208   68713 cri.go:89] found id: ""
	I0815 18:40:47.892237   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.892246   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:47.892252   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:47.892311   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:47.926916   68713 cri.go:89] found id: ""
	I0815 18:40:47.926944   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.926965   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:47.926976   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:47.926990   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:48.002907   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:48.002927   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:48.002942   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:48.085727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:48.085762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:48.127192   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:48.127224   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:48.180172   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:48.180208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:50.694573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:50.709411   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:50.709472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:50.750956   68713 cri.go:89] found id: ""
	I0815 18:40:50.750985   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.750994   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:50.751000   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:50.751048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:50.791072   68713 cri.go:89] found id: ""
	I0815 18:40:50.791149   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.791174   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:50.791186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:50.791247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:50.827692   68713 cri.go:89] found id: ""
	I0815 18:40:50.827717   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.827728   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:50.827735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:50.827794   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:50.866587   68713 cri.go:89] found id: ""
	I0815 18:40:50.866616   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.866626   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:50.866633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:50.866692   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:50.907012   68713 cri.go:89] found id: ""
	I0815 18:40:50.907040   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.907047   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:50.907053   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:50.907101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:50.951212   68713 cri.go:89] found id: ""
	I0815 18:40:50.951243   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.951256   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:50.951263   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:50.951316   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:50.989771   68713 cri.go:89] found id: ""
	I0815 18:40:50.989802   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.989812   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:50.989818   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:50.989867   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:51.024423   68713 cri.go:89] found id: ""
	I0815 18:40:51.024454   68713 logs.go:276] 0 containers: []
	W0815 18:40:51.024465   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:51.024475   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:51.024500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:51.076973   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:51.077012   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:51.090963   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:51.090989   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:51.169981   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:51.170005   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:51.170029   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:51.248990   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:51.249040   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:50.349092   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.350278   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:50.154517   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.148131   68248 pod_ready.go:82] duration metric: took 4m0.000077937s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	E0815 18:40:52.148161   68248 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 18:40:52.148183   68248 pod_ready.go:39] duration metric: took 4m13.224994468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:40:52.148235   68248 kubeadm.go:597] duration metric: took 4m20.945128985s to restartPrimaryControlPlane
	W0815 18:40:52.148324   68248 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:40:52.148376   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:40:51.660289   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:54.157718   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:53.790172   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:53.803752   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:53.803816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:53.843203   68713 cri.go:89] found id: ""
	I0815 18:40:53.843231   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.843246   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:53.843254   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:53.843314   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:53.878975   68713 cri.go:89] found id: ""
	I0815 18:40:53.879000   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.879008   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:53.879013   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:53.879078   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:53.915640   68713 cri.go:89] found id: ""
	I0815 18:40:53.915668   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.915675   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:53.915683   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:53.915746   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:53.956312   68713 cri.go:89] found id: ""
	I0815 18:40:53.956340   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.956356   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:53.956365   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:53.956426   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:53.992276   68713 cri.go:89] found id: ""
	I0815 18:40:53.992304   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.992314   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:53.992322   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:53.992387   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:54.034653   68713 cri.go:89] found id: ""
	I0815 18:40:54.034682   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.034693   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:54.034701   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:54.034761   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:54.072993   68713 cri.go:89] found id: ""
	I0815 18:40:54.073018   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.073027   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:54.073038   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:54.073107   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:54.107414   68713 cri.go:89] found id: ""
	I0815 18:40:54.107446   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.107456   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:54.107466   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:54.107481   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:54.145900   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:54.145928   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:54.197609   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:54.197639   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:54.211384   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:54.211410   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:54.280991   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:54.281018   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:54.281031   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:56.868270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:56.881168   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:56.881248   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:56.915206   68713 cri.go:89] found id: ""
	I0815 18:40:56.915235   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.915243   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:56.915249   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:56.915308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:56.950838   68713 cri.go:89] found id: ""
	I0815 18:40:56.950864   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.950873   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:56.950879   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:56.950937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:56.993625   68713 cri.go:89] found id: ""
	I0815 18:40:56.993649   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.993656   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:56.993662   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:56.993718   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:57.029109   68713 cri.go:89] found id: ""
	I0815 18:40:57.029139   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.029150   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:57.029158   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:57.029213   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:57.063480   68713 cri.go:89] found id: ""
	I0815 18:40:57.063518   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.063530   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:57.063538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:57.063598   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:57.102830   68713 cri.go:89] found id: ""
	I0815 18:40:57.102859   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.102870   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:57.102877   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:57.102938   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:57.137116   68713 cri.go:89] found id: ""
	I0815 18:40:57.137146   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.137159   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:57.137173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:57.137235   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:57.174678   68713 cri.go:89] found id: ""
	I0815 18:40:57.174706   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.174717   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:57.174727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:57.174741   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:57.213270   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:57.213311   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:57.269463   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:57.269500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:57.283891   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:57.283915   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:57.355563   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:57.355589   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:57.355601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:54.849266   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:57.350343   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:56.657843   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:58.658098   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:59.943493   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:59.957225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:59.957285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:59.993113   68713 cri.go:89] found id: ""
	I0815 18:40:59.993142   68713 logs.go:276] 0 containers: []
	W0815 18:40:59.993153   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:59.993167   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:59.993228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:00.033485   68713 cri.go:89] found id: ""
	I0815 18:41:00.033515   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.033525   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:00.033533   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:00.033594   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:00.070808   68713 cri.go:89] found id: ""
	I0815 18:41:00.070830   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.070838   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:00.070844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:00.070893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:00.113043   68713 cri.go:89] found id: ""
	I0815 18:41:00.113067   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.113076   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:00.113082   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:00.113139   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:00.148089   68713 cri.go:89] found id: ""
	I0815 18:41:00.148118   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.148129   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:00.148136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:00.148206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:00.188343   68713 cri.go:89] found id: ""
	I0815 18:41:00.188375   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.188386   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:00.188394   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:00.188448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:00.224287   68713 cri.go:89] found id: ""
	I0815 18:41:00.224312   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.224323   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:00.224337   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:00.224398   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:00.263983   68713 cri.go:89] found id: ""
	I0815 18:41:00.264008   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.264016   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:00.264025   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:00.264037   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:00.278057   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:00.278083   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:00.355112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:00.355133   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:00.355146   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:00.436636   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:00.436672   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:00.474774   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:00.474801   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:59.849797   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:02.349363   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:01.158004   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.158380   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:05.658860   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.027434   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:03.041422   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:03.041496   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:03.074093   68713 cri.go:89] found id: ""
	I0815 18:41:03.074119   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.074130   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:03.074138   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:03.074198   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:03.111489   68713 cri.go:89] found id: ""
	I0815 18:41:03.111517   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.111529   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:03.111537   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:03.111599   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:03.147716   68713 cri.go:89] found id: ""
	I0815 18:41:03.147747   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.147756   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:03.147762   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:03.147825   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:03.184609   68713 cri.go:89] found id: ""
	I0815 18:41:03.184635   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.184644   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:03.184652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:03.184710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:03.221839   68713 cri.go:89] found id: ""
	I0815 18:41:03.221869   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.221878   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:03.221883   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:03.221935   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:03.262619   68713 cri.go:89] found id: ""
	I0815 18:41:03.262649   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.262661   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:03.262669   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:03.262733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:03.297826   68713 cri.go:89] found id: ""
	I0815 18:41:03.297849   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.297864   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:03.297875   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:03.297922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:03.345046   68713 cri.go:89] found id: ""
	I0815 18:41:03.345074   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.345083   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:03.345095   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:03.345133   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:03.416878   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:03.416905   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:03.416920   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:03.491548   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:03.491583   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:03.533821   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:03.533852   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:03.587749   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:03.587787   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.104002   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:06.118123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:06.118195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:06.156179   68713 cri.go:89] found id: ""
	I0815 18:41:06.156204   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.156213   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:06.156218   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:06.156275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:06.192834   68713 cri.go:89] found id: ""
	I0815 18:41:06.192858   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.192866   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:06.192871   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:06.192918   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:06.228355   68713 cri.go:89] found id: ""
	I0815 18:41:06.228379   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.228387   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:06.228393   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:06.228453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:06.262041   68713 cri.go:89] found id: ""
	I0815 18:41:06.262068   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.262079   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:06.262086   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:06.262152   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:06.303217   68713 cri.go:89] found id: ""
	I0815 18:41:06.303249   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.303261   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:06.303268   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:06.303335   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:06.337180   68713 cri.go:89] found id: ""
	I0815 18:41:06.337208   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.337215   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:06.337222   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:06.337270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:06.375054   68713 cri.go:89] found id: ""
	I0815 18:41:06.375081   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.375088   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:06.375095   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:06.375163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:06.412188   68713 cri.go:89] found id: ""
	I0815 18:41:06.412216   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.412227   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:06.412239   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:06.412255   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.425607   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:06.425633   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:06.500853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:06.500872   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:06.500883   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:06.577297   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:06.577333   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:06.620209   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:06.620239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:04.848677   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:06.849254   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.849300   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.157734   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:10.157969   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:09.171606   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:09.184197   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:09.184257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:09.217865   68713 cri.go:89] found id: ""
	I0815 18:41:09.217893   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.217904   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:09.217912   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:09.217967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:09.254032   68713 cri.go:89] found id: ""
	I0815 18:41:09.254055   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.254064   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:09.254073   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:09.254128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:09.291772   68713 cri.go:89] found id: ""
	I0815 18:41:09.291798   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.291808   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:09.291816   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:09.291880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:09.326695   68713 cri.go:89] found id: ""
	I0815 18:41:09.326717   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.326726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:09.326731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:09.326791   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:09.365779   68713 cri.go:89] found id: ""
	I0815 18:41:09.365807   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.365818   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:09.365825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:09.365880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:09.413475   68713 cri.go:89] found id: ""
	I0815 18:41:09.413500   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.413509   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:09.413514   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:09.413578   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:09.449483   68713 cri.go:89] found id: ""
	I0815 18:41:09.449511   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.449521   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:09.449528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:09.449623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:09.487484   68713 cri.go:89] found id: ""
	I0815 18:41:09.487513   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.487525   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:09.487535   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:09.487549   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:09.536746   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:09.536777   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:09.549912   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:09.549944   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:09.619192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:09.619227   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:09.619246   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:09.698370   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:09.698404   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:12.240745   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:12.254814   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:12.254875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:12.291346   68713 cri.go:89] found id: ""
	I0815 18:41:12.291376   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.291387   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:12.291395   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:12.291456   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:12.324832   68713 cri.go:89] found id: ""
	I0815 18:41:12.324867   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.324878   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:12.324886   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:12.324950   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:12.360172   68713 cri.go:89] found id: ""
	I0815 18:41:12.360193   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.360201   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:12.360206   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:12.360251   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:12.394671   68713 cri.go:89] found id: ""
	I0815 18:41:12.394700   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.394710   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:12.394731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:12.394800   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:12.428951   68713 cri.go:89] found id: ""
	I0815 18:41:12.428999   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.429007   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:12.429013   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:12.429057   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:12.466035   68713 cri.go:89] found id: ""
	I0815 18:41:12.466061   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.466069   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:12.466075   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:12.466125   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:12.500003   68713 cri.go:89] found id: ""
	I0815 18:41:12.500031   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.500042   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:12.500050   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:12.500105   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:12.537433   68713 cri.go:89] found id: ""
	I0815 18:41:12.537457   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.537464   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:12.537473   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:12.537484   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:12.586768   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:12.586809   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:12.600549   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:12.600578   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:12.673112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:12.673138   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:12.673154   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:12.754689   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:12.754726   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:11.348767   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.349973   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:12.158249   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.158354   68429 pod_ready.go:82] duration metric: took 4m0.006607137s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:13.158373   68429 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:13.158381   68429 pod_ready.go:39] duration metric: took 4m7.064501997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:13.158395   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:13.158423   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:13.158467   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:13.203746   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.203771   68429 cri.go:89] found id: ""
	I0815 18:41:13.203779   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:13.203840   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.208188   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:13.208248   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:13.245326   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.245351   68429 cri.go:89] found id: ""
	I0815 18:41:13.245359   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:13.245412   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.250212   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:13.250281   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:13.296537   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:13.296565   68429 cri.go:89] found id: ""
	I0815 18:41:13.296576   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:13.296634   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.300823   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:13.300881   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:13.337973   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.338018   68429 cri.go:89] found id: ""
	I0815 18:41:13.338031   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:13.338083   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.342251   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:13.342307   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:13.379921   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.379948   68429 cri.go:89] found id: ""
	I0815 18:41:13.379957   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:13.380005   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.384451   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:13.384539   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:13.421077   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:13.421113   68429 cri.go:89] found id: ""
	I0815 18:41:13.421122   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:13.421180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.425566   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:13.425640   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:13.468663   68429 cri.go:89] found id: ""
	I0815 18:41:13.468688   68429 logs.go:276] 0 containers: []
	W0815 18:41:13.468696   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:13.468701   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:13.468753   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:13.506689   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:13.506711   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:13.506715   68429 cri.go:89] found id: ""
	I0815 18:41:13.506723   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:13.506784   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.511177   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.515519   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:13.515543   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:13.583771   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:13.583806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:13.714906   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:13.714945   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.766512   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:13.766548   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.818416   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:13.818450   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.859035   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:13.859073   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.901515   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:13.901546   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:14.437262   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:14.437304   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:14.453511   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:14.453551   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:14.489238   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:14.489267   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:14.540141   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:14.540184   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:14.574758   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:14.574785   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:14.609370   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:14.609398   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:15.294667   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:15.307758   68713 kubeadm.go:597] duration metric: took 4m2.67500099s to restartPrimaryControlPlane
	W0815 18:41:15.307840   68713 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:41:15.307872   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:41:15.761255   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:15.776049   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:15.786643   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:15.796517   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:15.796537   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:15.796585   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:15.806118   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:15.806167   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:15.816363   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:15.826396   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:15.826449   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:15.836538   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.847035   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:15.847093   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.857475   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:15.867084   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:15.867144   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:15.879736   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:15.954497   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:41:15.954588   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:16.098128   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:16.098244   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:16.098345   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:41:16.288507   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:16.290439   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:16.290555   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:16.290656   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:16.290756   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:16.290831   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:16.290923   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:16.291003   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:16.291096   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:16.291182   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:16.291280   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:16.291396   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:16.291457   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:16.291509   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:16.363570   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:16.549782   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:16.789250   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:16.983388   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:17.004293   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:17.006438   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:17.006485   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:17.154583   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:17.156594   68713 out.go:235]   - Booting up control plane ...
	I0815 18:41:17.156717   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:17.177351   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:17.179286   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:17.180313   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:17.183829   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:41:15.850424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.348986   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.430273   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.281857018s)
	I0815 18:41:18.430359   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:18.445633   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:18.457459   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:18.469748   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:18.469769   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:18.469818   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:18.480099   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:18.480146   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:18.491871   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:18.501274   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:18.501339   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:18.510186   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.518803   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:18.518863   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.527843   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:18.536437   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:18.536514   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:18.545573   68248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:18.596478   68248 kubeadm.go:310] W0815 18:41:18.577134    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.597311   68248 kubeadm.go:310] W0815 18:41:18.577958    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.709937   68248 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:41:17.151343   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:17.173653   68429 api_server.go:72] duration metric: took 4m18.293407117s to wait for apiserver process to appear ...
	I0815 18:41:17.173677   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:17.173724   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:17.173784   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:17.211484   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.211509   68429 cri.go:89] found id: ""
	I0815 18:41:17.211518   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:17.211583   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.216011   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:17.216107   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:17.265454   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.265486   68429 cri.go:89] found id: ""
	I0815 18:41:17.265497   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:17.265554   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.269804   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:17.269868   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:17.310339   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.310363   68429 cri.go:89] found id: ""
	I0815 18:41:17.310371   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:17.310435   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.315639   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:17.315695   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:17.352364   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.352387   68429 cri.go:89] found id: ""
	I0815 18:41:17.352396   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:17.352452   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.356782   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:17.356848   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:17.396704   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.396733   68429 cri.go:89] found id: ""
	I0815 18:41:17.396744   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:17.396799   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.400920   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:17.400985   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:17.440361   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.440390   68429 cri.go:89] found id: ""
	I0815 18:41:17.440400   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:17.440464   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.445057   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:17.445127   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:17.487341   68429 cri.go:89] found id: ""
	I0815 18:41:17.487369   68429 logs.go:276] 0 containers: []
	W0815 18:41:17.487380   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:17.487388   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:17.487446   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:17.528197   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.528218   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.528223   68429 cri.go:89] found id: ""
	I0815 18:41:17.528229   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:17.528285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.532536   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.536745   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:17.536768   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.574236   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:17.574268   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:17.617822   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:17.617853   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.673009   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:17.673037   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.717620   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:17.717647   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.764641   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:17.764671   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.815586   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:17.815618   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.855287   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:17.855310   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.906486   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:17.906514   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.941540   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:17.941562   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:18.373461   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:18.373497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:18.454203   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:18.454244   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:18.470284   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:18.470315   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
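Note: the gathering pass above collects diagnostics by listing one container ID per control-plane component with "crictl ps -a --quiet --name=<component>" and then tailing that container's logs with "crictl logs --tail 400 <id>". A minimal standalone sketch of the same pattern, assuming crictl is installed and sudo is available; this is an illustration, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs mirrors the pattern in the log above: list container IDs for a
// named component via crictl, then tail the last N lines of each container's logs.
func tailComponentLogs(component string, tail int) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", component, err)
	}
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("gathering logs for %s: %w", id, err)
		}
		fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		if err := tailComponentLogs(c, 400); err != nil {
			fmt.Println(err)
		}
	}
}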
	I0815 18:41:20.349635   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:22.350034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:21.080947   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:41:21.085334   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:41:21.086420   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:21.086442   68429 api_server.go:131] duration metric: took 3.912756949s to wait for apiserver health ...
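Note: the apiserver health wait above is a plain HTTPS GET against the /healthz endpoint, repeated until it returns 200 with body "ok". A rough sketch of such a poll; the InsecureSkipVerify transport is only to keep the example self-contained, a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns HTTP 200
// or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative only: skip certificate verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint taken from the log above (default-k8s-diff-port apiserver).
	if err := waitForHealthz("https://192.168.61.7:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}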
	I0815 18:41:21.086452   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:21.086481   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:21.086537   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:21.124183   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.124210   68429 cri.go:89] found id: ""
	I0815 18:41:21.124218   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:21.124285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.128402   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:21.128472   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:21.164737   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.164768   68429 cri.go:89] found id: ""
	I0815 18:41:21.164779   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:21.164835   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.170622   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:21.170699   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:21.206823   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.206847   68429 cri.go:89] found id: ""
	I0815 18:41:21.206855   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:21.206910   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.211055   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:21.211128   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:21.255529   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.255555   68429 cri.go:89] found id: ""
	I0815 18:41:21.255565   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:21.255629   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.260062   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:21.260139   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:21.298058   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.298116   68429 cri.go:89] found id: ""
	I0815 18:41:21.298124   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:21.298180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.302821   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:21.302892   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:21.340895   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.340925   68429 cri.go:89] found id: ""
	I0815 18:41:21.340936   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:21.341003   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.345545   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:21.345638   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:21.383180   68429 cri.go:89] found id: ""
	I0815 18:41:21.383212   68429 logs.go:276] 0 containers: []
	W0815 18:41:21.383223   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:21.383232   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:21.383301   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:21.421152   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:21.421178   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.421185   68429 cri.go:89] found id: ""
	I0815 18:41:21.421198   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:21.421257   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.426326   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.430307   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:21.430351   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:21.562655   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:21.562697   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.613436   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:21.613470   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.674678   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:21.674721   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.717283   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:21.717316   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.760218   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:21.760249   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.802313   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:21.802352   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.874565   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:21.874608   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:21.891629   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:21.891666   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:21.934128   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:21.934170   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.985467   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:21.985497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:22.023731   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:22.023770   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:22.403584   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:22.403626   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:25.005734   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:41:25.005760   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.005766   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.005770   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.005775   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.005778   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.005781   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.005788   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.005793   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.005799   68429 system_pods.go:74] duration metric: took 3.919341536s to wait for pod list to return data ...
	I0815 18:41:25.005806   68429 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:25.008398   68429 default_sa.go:45] found service account: "default"
	I0815 18:41:25.008419   68429 default_sa.go:55] duration metric: took 2.608281ms for default service account to be created ...
	I0815 18:41:25.008427   68429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:25.012784   68429 system_pods.go:86] 8 kube-system pods found
	I0815 18:41:25.012804   68429 system_pods.go:89] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.012810   68429 system_pods.go:89] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.012817   68429 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.012821   68429 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.012825   68429 system_pods.go:89] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.012828   68429 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.012834   68429 system_pods.go:89] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.012838   68429 system_pods.go:89] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.012850   68429 system_pods.go:126] duration metric: took 4.415694ms to wait for k8s-apps to be running ...
	I0815 18:41:25.012858   68429 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:25.012905   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:25.028245   68429 system_svc.go:56] duration metric: took 15.378403ms WaitForService to wait for kubelet
	I0815 18:41:25.028272   68429 kubeadm.go:582] duration metric: took 4m26.148030358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:25.028290   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:25.030696   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:25.030717   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:25.030728   68429 node_conditions.go:105] duration metric: took 2.43352ms to run NodePressure ...
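Note: the NodePressure verification above reads the node's reported capacity (here 17734596Ki of ephemeral storage and 2 CPUs). A small client-go sketch that reads the same fields; the kubeconfig path is a placeholder, not the path used in this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; substitute the file for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Status.Capacity is a ResourceList keyed by resource name.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}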
	I0815 18:41:25.030742   68429 start.go:241] waiting for startup goroutines ...
	I0815 18:41:25.030751   68429 start.go:246] waiting for cluster config update ...
	I0815 18:41:25.030768   68429 start.go:255] writing updated cluster config ...
	I0815 18:41:25.031028   68429 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:25.077910   68429 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:25.079973   68429 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-423062" cluster and "default" namespace by default
	I0815 18:41:27.911884   68248 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 18:41:27.911943   68248 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:27.912011   68248 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:27.912130   68248 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:27.912272   68248 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 18:41:27.912359   68248 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:27.913884   68248 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:27.913990   68248 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:27.914092   68248 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:27.914197   68248 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:27.914289   68248 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:27.914362   68248 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:27.914433   68248 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:27.914521   68248 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:27.914606   68248 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:27.914859   68248 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:27.914984   68248 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:27.915040   68248 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:27.915119   68248 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:27.915190   68248 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:27.915268   68248 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 18:41:27.915336   68248 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:27.915419   68248 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:27.915500   68248 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:27.915606   68248 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:27.915691   68248 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:27.917229   68248 out.go:235]   - Booting up control plane ...
	I0815 18:41:27.917311   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:27.917377   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:27.917433   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:27.917521   68248 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:27.917590   68248 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:27.917623   68248 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:27.917740   68248 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 18:41:27.917829   68248 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 18:41:27.917880   68248 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00200618s
	I0815 18:41:27.917954   68248 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 18:41:27.918011   68248 kubeadm.go:310] [api-check] The API server is healthy after 5.501605719s
	I0815 18:41:27.918122   68248 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 18:41:27.918268   68248 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 18:41:27.918361   68248 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 18:41:27.918626   68248 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-555028 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 18:41:27.918723   68248 kubeadm.go:310] [bootstrap-token] Using token: 99xu37.bm6hiisu91f6rbvd
	I0815 18:41:27.920248   68248 out.go:235]   - Configuring RBAC rules ...
	I0815 18:41:27.920360   68248 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 18:41:27.920467   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 18:41:27.920651   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 18:41:27.920785   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 18:41:27.920938   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 18:41:27.921052   68248 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 18:41:27.921225   68248 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 18:41:27.921286   68248 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 18:41:27.921356   68248 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 18:41:27.921369   68248 kubeadm.go:310] 
	I0815 18:41:27.921422   68248 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 18:41:27.921428   68248 kubeadm.go:310] 
	I0815 18:41:27.921488   68248 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 18:41:27.921497   68248 kubeadm.go:310] 
	I0815 18:41:27.921521   68248 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 18:41:27.921570   68248 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 18:41:27.921619   68248 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 18:41:27.921625   68248 kubeadm.go:310] 
	I0815 18:41:27.921698   68248 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 18:41:27.921711   68248 kubeadm.go:310] 
	I0815 18:41:27.921776   68248 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 18:41:27.921787   68248 kubeadm.go:310] 
	I0815 18:41:27.921858   68248 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 18:41:27.921963   68248 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 18:41:27.922055   68248 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 18:41:27.922064   68248 kubeadm.go:310] 
	I0815 18:41:27.922166   68248 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 18:41:27.922281   68248 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 18:41:27.922306   68248 kubeadm.go:310] 
	I0815 18:41:27.922413   68248 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922550   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 18:41:27.922593   68248 kubeadm.go:310] 	--control-plane 
	I0815 18:41:27.922603   68248 kubeadm.go:310] 
	I0815 18:41:27.922703   68248 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 18:41:27.922712   68248 kubeadm.go:310] 
	I0815 18:41:27.922800   68248 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922901   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
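Note: the --discovery-token-ca-cert-hash value printed in the join commands above is the sha256 digest of the cluster CA's public key in DER-encoded SubjectPublicKeyInfo form. A sketch that recomputes such a hash from a CA certificate; the path assumes a ca.crt inside the certificateDir shown earlier (/var/lib/minikube/certs), which is an assumption about this run:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes the kubeadm discovery hash: sha256 over the DER-encoded
// SubjectPublicKeyInfo of the cluster CA's public key.
func caCertHash(caCertPath string) (string, error) {
	data, err := os.ReadFile(caCertPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", caCertPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt") // assumed CA location for this profile
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}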
	I0815 18:41:27.922909   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:41:27.922916   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:41:27.924596   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:41:24.849483   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.350715   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.926142   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:41:27.938307   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:41:27.958862   68248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:41:27.958974   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:27.959032   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-555028 minikube.k8s.io/updated_at=2024_08_15T18_41_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=embed-certs-555028 minikube.k8s.io/primary=true
	I0815 18:41:28.156844   68248 ops.go:34] apiserver oom_adj: -16
	I0815 18:41:28.157122   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:28.657735   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.157713   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.658109   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.157486   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.657573   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.157463   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.658073   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.757929   68248 kubeadm.go:1113] duration metric: took 3.799012728s to wait for elevateKubeSystemPrivileges
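Note: the elevateKubeSystemPrivileges step above binds cluster-admin to the kube-system:default service account and then retries "kubectl get sa default" roughly twice per second until the account exists. A compact sketch of that retry loop using os/exec; the binary and kubeconfig paths are taken from the log, but the helper itself is illustrative rather than minikube's code:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until the default service
// account exists or the timeout elapses, mirroring the polling seen in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}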
	I0815 18:41:31.757969   68248 kubeadm.go:394] duration metric: took 5m0.607372858s to StartCluster
	I0815 18:41:31.757992   68248 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.758070   68248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:41:31.759686   68248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.759915   68248 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:41:31.759982   68248 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:41:31.760072   68248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-555028"
	I0815 18:41:31.760090   68248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-555028"
	I0815 18:41:31.760115   68248 addons.go:69] Setting metrics-server=true in profile "embed-certs-555028"
	I0815 18:41:31.760133   68248 addons.go:234] Setting addon metrics-server=true in "embed-certs-555028"
	W0815 18:41:31.760141   68248 addons.go:243] addon metrics-server should already be in state true
	I0815 18:41:31.760148   68248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-555028"
	I0815 18:41:31.760174   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760110   68248 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-555028"
	W0815 18:41:31.760230   68248 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:41:31.760270   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760108   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:41:31.760603   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760619   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760637   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760642   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760658   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760708   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.761566   68248 out.go:177] * Verifying Kubernetes components...
	I0815 18:41:31.762780   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:41:31.777893   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0815 18:41:31.778444   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.779021   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.779049   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.779496   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.780129   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.780182   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.780954   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0815 18:41:31.781146   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0815 18:41:31.781506   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.781586   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.782056   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782061   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782078   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782079   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782437   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782494   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782685   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.783004   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.783034   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.786246   68248 addons.go:234] Setting addon default-storageclass=true in "embed-certs-555028"
	W0815 18:41:31.786270   68248 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:41:31.786300   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.786682   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.786714   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.800152   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	I0815 18:41:31.800729   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.801272   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.801295   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.801656   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.801835   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.803539   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0815 18:41:31.803751   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.804058   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.804640   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.804660   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.805007   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.805157   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.806098   68248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:41:31.806397   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0815 18:41:31.806814   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.807269   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.807450   68248 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:31.807466   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:41:31.807484   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.807744   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.807757   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.808066   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.808889   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.808923   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.809143   68248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:41:31.810575   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:41:31.810593   68248 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:41:31.810619   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.810648   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811760   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.811761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.811802   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811948   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.812101   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.812243   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.814211   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.814675   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814953   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.815117   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.815271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.815391   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.829657   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0815 18:41:31.830122   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.830710   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.830734   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.831077   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.831291   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.833016   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.833271   68248 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:31.833285   68248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:41:31.833302   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.836248   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836655   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.836682   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836908   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.837097   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.837233   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.837410   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.988466   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:41:32.010147   68248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019505   68248 node_ready.go:49] node "embed-certs-555028" has status "Ready":"True"
	I0815 18:41:32.019529   68248 node_ready.go:38] duration metric: took 9.346825ms for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019541   68248 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:32.032036   68248 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:32.125991   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:32.138532   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:41:32.138554   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:41:32.155222   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:32.196478   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:41:32.196517   68248 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:41:32.270461   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:32.270495   68248 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:41:32.405567   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:33.205712   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.050454437s)
	I0815 18:41:33.205772   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205785   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.205793   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.079759984s)
	I0815 18:41:33.205826   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205838   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206153   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206169   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206184   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206194   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206200   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206205   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206210   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206218   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206202   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206226   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206415   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206421   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206430   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206432   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.245033   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.245061   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.245328   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.245343   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.651886   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246273862s)
	I0815 18:41:33.651945   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.651960   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652264   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652307   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.652326   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.652335   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652618   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652640   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652650   68248 addons.go:475] Verifying addon metrics-server=true in "embed-certs-555028"
	I0815 18:41:33.652697   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.654487   68248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:41:29.848462   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:31.850877   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:33.655868   68248 addons.go:510] duration metric: took 1.89588756s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 18:41:34.044605   68248 pod_ready.go:103] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:34.538170   68248 pod_ready.go:93] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.538199   68248 pod_ready.go:82] duration metric: took 2.506135047s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.538212   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543160   68248 pod_ready.go:93] pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.543182   68248 pod_ready.go:82] duration metric: took 4.962289ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543195   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547126   68248 pod_ready.go:93] pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.547144   68248 pod_ready.go:82] duration metric: took 3.94279ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547152   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:36.553459   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:37.555276   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:37.555299   68248 pod_ready.go:82] duration metric: took 3.008140869s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:37.555307   68248 pod_ready.go:39] duration metric: took 5.535754922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
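Note: the pod_ready waits above poll each system-critical pod until its Ready condition reports "True". A short client-go version of the same check, using a placeholder kubeconfig path and the etcd pod name from this run:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod's Ready condition is True.
func isPodReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := isPodReady(cs, "kube-system", "etcd-embed-certs-555028")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod never became Ready")
}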
	I0815 18:41:37.555330   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:37.555378   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:37.575318   68248 api_server.go:72] duration metric: took 5.815371975s to wait for apiserver process to appear ...
	I0815 18:41:37.575344   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:37.575361   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:41:37.580989   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:41:37.582142   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:37.582164   68248 api_server.go:131] duration metric: took 6.812732ms to wait for apiserver health ...
	I0815 18:41:37.582174   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:37.589334   68248 system_pods.go:59] 9 kube-system pods found
	I0815 18:41:37.589366   68248 system_pods.go:61] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.589377   68248 system_pods.go:61] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.589385   68248 system_pods.go:61] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.589390   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.589397   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.589403   68248 system_pods.go:61] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.589410   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.589422   68248 system_pods.go:61] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.589431   68248 system_pods.go:61] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.589439   68248 system_pods.go:74] duration metric: took 7.257758ms to wait for pod list to return data ...
	I0815 18:41:37.589450   68248 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:37.592468   68248 default_sa.go:45] found service account: "default"
	I0815 18:41:37.592500   68248 default_sa.go:55] duration metric: took 3.029278ms for default service account to be created ...
	I0815 18:41:37.592511   68248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:37.597697   68248 system_pods.go:86] 9 kube-system pods found
	I0815 18:41:37.597725   68248 system_pods.go:89] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.597730   68248 system_pods.go:89] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.597736   68248 system_pods.go:89] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.597740   68248 system_pods.go:89] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.597744   68248 system_pods.go:89] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.597747   68248 system_pods.go:89] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.597751   68248 system_pods.go:89] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.597756   68248 system_pods.go:89] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.597763   68248 system_pods.go:89] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.597769   68248 system_pods.go:126] duration metric: took 5.252997ms to wait for k8s-apps to be running ...
	I0815 18:41:37.597779   68248 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:37.597819   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:37.616004   68248 system_svc.go:56] duration metric: took 18.217091ms WaitForService to wait for kubelet
	I0815 18:41:37.616032   68248 kubeadm.go:582] duration metric: took 5.856091444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:37.616049   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:37.619195   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:37.619215   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:37.619223   68248 node_conditions.go:105] duration metric: took 3.169759ms to run NodePressure ...
	I0815 18:41:37.619234   68248 start.go:241] waiting for startup goroutines ...
	I0815 18:41:37.619242   68248 start.go:246] waiting for cluster config update ...
	I0815 18:41:37.619253   68248 start.go:255] writing updated cluster config ...
	I0815 18:41:37.619520   68248 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:37.669469   68248 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:37.671485   68248 out.go:177] * Done! kubectl is now configured to use "embed-certs-555028" cluster and "default" namespace by default
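A hedged follow-up, not part of the test run, that would confirm the context configured by the line above:

    # list nodes and kube-system pods in the freshly configured cluster
    kubectl --context embed-certs-555028 get nodes
    kubectl --context embed-certs-555028 -n kube-system get pods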
	I0815 18:41:34.350702   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:36.849248   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:39.348684   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:41.349379   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:43.848932   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:46.348801   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:48.349736   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:50.848728   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:52.850583   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.184855   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:41:57.185437   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:41:57.185667   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:54.851200   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.349542   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:42:02.186077   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:02.186272   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:59.349724   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:59.349748   67936 pod_ready.go:82] duration metric: took 4m0.007281981s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:59.349757   67936 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:59.349763   67936 pod_ready.go:39] duration metric: took 4m1.606987494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:59.349779   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:59.349802   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:59.349844   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:59.395509   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:41:59.395541   67936 cri.go:89] found id: ""
	I0815 18:41:59.395552   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:41:59.395608   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.400063   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:59.400140   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:59.435356   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:41:59.435379   67936 cri.go:89] found id: ""
	I0815 18:41:59.435386   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:41:59.435431   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.440159   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:59.440213   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:59.479810   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.479841   67936 cri.go:89] found id: ""
	I0815 18:41:59.479851   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:41:59.479907   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.484341   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:59.484394   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:59.521077   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.521104   67936 cri.go:89] found id: ""
	I0815 18:41:59.521114   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:41:59.521168   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.525075   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:59.525131   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:59.564058   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:41:59.564084   67936 cri.go:89] found id: ""
	I0815 18:41:59.564093   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:41:59.564150   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.568668   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:59.568734   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:59.604385   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.604406   67936 cri.go:89] found id: ""
	I0815 18:41:59.604416   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:41:59.604473   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.609023   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:59.609095   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:59.646289   67936 cri.go:89] found id: ""
	I0815 18:41:59.646334   67936 logs.go:276] 0 containers: []
	W0815 18:41:59.646346   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:59.646355   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:59.646422   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:59.681861   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.681889   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:41:59.681895   67936 cri.go:89] found id: ""
	I0815 18:41:59.681903   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:41:59.681951   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.686379   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.690328   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:59.690353   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:59.759302   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:41:59.759338   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.798249   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:41:59.798276   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.834097   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:41:59.834129   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.885365   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:41:59.885398   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.923013   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:59.923038   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:59.938162   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:59.938192   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:00.077340   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:00.077377   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:00.122292   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:00.122323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:00.165209   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:00.165235   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:00.201278   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:00.201317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:00.238007   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:00.238042   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:00.709997   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:00.710043   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.252351   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:42:03.268074   67936 api_server.go:72] duration metric: took 4m12.770065297s to wait for apiserver process to appear ...
	I0815 18:42:03.268104   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:42:03.268159   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:03.268227   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:03.305890   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:03.305913   67936 cri.go:89] found id: ""
	I0815 18:42:03.305923   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:03.305981   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.309958   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:03.310019   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:03.344602   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:03.344630   67936 cri.go:89] found id: ""
	I0815 18:42:03.344639   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:03.344696   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.349261   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:03.349317   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:03.383892   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:03.383912   67936 cri.go:89] found id: ""
	I0815 18:42:03.383919   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:03.383968   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.388158   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:03.388219   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:03.423264   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.423293   67936 cri.go:89] found id: ""
	I0815 18:42:03.423303   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:03.423352   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.427436   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:03.427496   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:03.470792   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.470819   67936 cri.go:89] found id: ""
	I0815 18:42:03.470829   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:03.470890   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.475884   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:03.475956   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:03.513081   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.513103   67936 cri.go:89] found id: ""
	I0815 18:42:03.513110   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:03.513161   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.517913   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:03.517985   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:03.556149   67936 cri.go:89] found id: ""
	I0815 18:42:03.556180   67936 logs.go:276] 0 containers: []
	W0815 18:42:03.556191   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:03.556199   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:03.556257   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:03.595987   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:03.596015   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:03.596021   67936 cri.go:89] found id: ""
	I0815 18:42:03.596030   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:03.596112   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.600430   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.604422   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:03.604443   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:03.676629   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:03.676665   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.717487   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:03.717514   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.755606   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:03.755632   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.815152   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:03.815187   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.857853   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:03.857882   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:04.296939   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:04.296983   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:04.312346   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:04.312373   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:04.424132   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:04.424162   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:04.482298   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:04.482326   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:04.526805   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:04.526832   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:04.564842   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:04.564871   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:04.602297   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:04.602323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.137972   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:42:07.143165   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:42:07.144155   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:42:07.144174   67936 api_server.go:131] duration metric: took 3.876063215s to wait for apiserver health ...
	I0815 18:42:07.144182   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:42:07.144201   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:07.144243   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:07.185685   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:07.185709   67936 cri.go:89] found id: ""
	I0815 18:42:07.185717   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:07.185782   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.190086   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:07.190179   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:07.233020   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:07.233044   67936 cri.go:89] found id: ""
	I0815 18:42:07.233053   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:07.233114   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.237639   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:07.237698   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:07.277613   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:07.277642   67936 cri.go:89] found id: ""
	I0815 18:42:07.277652   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:07.277714   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.282273   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:07.282346   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:07.324972   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.325003   67936 cri.go:89] found id: ""
	I0815 18:42:07.325013   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:07.325071   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.329402   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:07.329470   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:07.369812   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.369840   67936 cri.go:89] found id: ""
	I0815 18:42:07.369849   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:07.369902   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.373993   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:07.374055   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:07.412036   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.412062   67936 cri.go:89] found id: ""
	I0815 18:42:07.412072   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:07.412145   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.416191   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:07.416263   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:07.457677   67936 cri.go:89] found id: ""
	I0815 18:42:07.457710   67936 logs.go:276] 0 containers: []
	W0815 18:42:07.457721   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:07.457728   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:07.457792   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:07.498173   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:07.498199   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.498204   67936 cri.go:89] found id: ""
	I0815 18:42:07.498210   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:07.498268   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.502704   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.506501   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:07.506520   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.542685   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:07.542713   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.584070   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:07.584097   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.634780   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:07.634812   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.669410   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:07.669436   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:08.062406   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:08.062454   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:08.077171   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:08.077209   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:08.186125   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:08.186158   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:08.229621   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:08.229655   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:08.266791   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:08.266818   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:08.314172   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:08.314197   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:08.388793   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:08.388837   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:08.438287   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:08.438317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:10.990845   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:42:10.990875   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.990879   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.990883   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.990887   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.990890   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.990894   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.990900   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.990905   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.990913   67936 system_pods.go:74] duration metric: took 3.846725869s to wait for pod list to return data ...
	I0815 18:42:10.990919   67936 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:42:10.993933   67936 default_sa.go:45] found service account: "default"
	I0815 18:42:10.993958   67936 default_sa.go:55] duration metric: took 3.032805ms for default service account to be created ...
	I0815 18:42:10.993968   67936 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:42:10.998531   67936 system_pods.go:86] 8 kube-system pods found
	I0815 18:42:10.998553   67936 system_pods.go:89] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.998558   67936 system_pods.go:89] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.998562   67936 system_pods.go:89] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.998567   67936 system_pods.go:89] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.998570   67936 system_pods.go:89] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.998575   67936 system_pods.go:89] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.998582   67936 system_pods.go:89] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.998586   67936 system_pods.go:89] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.998592   67936 system_pods.go:126] duration metric: took 4.619003ms to wait for k8s-apps to be running ...
	I0815 18:42:10.998598   67936 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:42:10.998638   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:42:11.015236   67936 system_svc.go:56] duration metric: took 16.627802ms WaitForService to wait for kubelet
	I0815 18:42:11.015260   67936 kubeadm.go:582] duration metric: took 4m20.517256799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:42:11.015280   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:42:11.018544   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:42:11.018570   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:42:11.018584   67936 node_conditions.go:105] duration metric: took 3.298753ms to run NodePressure ...
	I0815 18:42:11.018598   67936 start.go:241] waiting for startup goroutines ...
	I0815 18:42:11.018611   67936 start.go:246] waiting for cluster config update ...
	I0815 18:42:11.018626   67936 start.go:255] writing updated cluster config ...
	I0815 18:42:11.018907   67936 ssh_runner.go:195] Run: rm -f paused
	I0815 18:42:11.065371   67936 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:42:11.067513   67936 out.go:177] * Done! kubectl is now configured to use "no-preload-599042" cluster and "default" namespace by default
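The metrics-server pod in this cluster (metrics-server-6867b74b74-djv7r) never reported Ready in the wait above; a hedged way to investigate it after startup, using the pod name taken from the log (the logs command may return nothing if the container never started):

    kubectl --context no-preload-599042 -n kube-system describe pod metrics-server-6867b74b74-djv7r
    kubectl --context no-preload-599042 -n kube-system logs metrics-server-6867b74b74-djv7r --tail=100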
	I0815 18:42:12.186839   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:12.187041   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:32.187938   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:32.188123   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.189799   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:43:12.190012   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.190023   68713 kubeadm.go:310] 
	I0815 18:43:12.190075   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:43:12.190133   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:43:12.190148   68713 kubeadm.go:310] 
	I0815 18:43:12.190205   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:43:12.190265   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:43:12.190394   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:43:12.190403   68713 kubeadm.go:310] 
	I0815 18:43:12.190523   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:43:12.190571   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:43:12.190627   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:43:12.190636   68713 kubeadm.go:310] 
	I0815 18:43:12.190772   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:43:12.190928   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:43:12.190950   68713 kubeadm.go:310] 
	I0815 18:43:12.191108   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:43:12.191218   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:43:12.191344   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:43:12.191478   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:43:12.191504   68713 kubeadm.go:310] 
	I0815 18:43:12.192283   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:43:12.192421   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:43:12.192523   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0815 18:43:12.192655   68713 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
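For readability, the troubleshooting commands kubeadm suggests in the output above, consolidated into one sequence (all of them appear verbatim in the log; CONTAINERID is kubeadm's own placeholder):

    systemctl status kubelet
    journalctl -xeu kubelet
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID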
	
	I0815 18:43:12.192699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:43:12.658571   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:43:12.675797   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:43:12.687340   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:43:12.687370   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:43:12.687422   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:43:12.698401   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:43:12.698464   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:43:12.709632   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:43:12.720330   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:43:12.720386   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:43:12.731593   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.742122   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:43:12.742185   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.753042   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:43:12.762799   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:43:12.762855   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
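The four grep/rm pairs above implement the stale-kubeconfig cleanup; a compact sketch of the same pattern (the loop form is illustrative, the file names and endpoint are taken from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done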
	I0815 18:43:12.772788   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:43:12.987927   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:45:08.956975   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:45:08.957069   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:45:08.958834   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:45:08.958904   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:45:08.958993   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:45:08.959133   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:45:08.959280   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:45:08.959376   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:45:08.961205   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:45:08.961294   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:45:08.961352   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:45:08.961424   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:45:08.961475   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:45:08.961536   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:45:08.961581   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:45:08.961637   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:45:08.961689   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:45:08.961795   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:45:08.961910   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:45:08.961971   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:45:08.962030   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:45:08.962078   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:45:08.962127   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:45:08.962214   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:45:08.962316   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:45:08.962448   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:45:08.962565   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:45:08.962626   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:45:08.962724   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:45:08.964403   68713 out.go:235]   - Booting up control plane ...
	I0815 18:45:08.964526   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:45:08.964631   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:45:08.964736   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:45:08.964866   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:45:08.965043   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:45:08.965121   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:45:08.965225   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965418   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965508   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965703   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965766   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965919   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965981   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966140   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966200   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966381   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966389   68713 kubeadm.go:310] 
	I0815 18:45:08.966438   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:45:08.966473   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:45:08.966481   68713 kubeadm.go:310] 
	I0815 18:45:08.966533   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:45:08.966580   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:45:08.966711   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:45:08.966718   68713 kubeadm.go:310] 
	I0815 18:45:08.966844   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:45:08.966900   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:45:08.966948   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:45:08.966958   68713 kubeadm.go:310] 
	I0815 18:45:08.967082   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:45:08.967201   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:45:08.967214   68713 kubeadm.go:310] 
	I0815 18:45:08.967341   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:45:08.967450   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:45:08.967548   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:45:08.967646   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:45:08.967678   68713 kubeadm.go:310] 
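For context on the repeated [kubelet-check] failures above: kubeadm waits for the control plane by polling the kubelet's local health endpoint on 127.0.0.1:10248, and every probe was refused, i.e. the kubelet process never came up on this node. A minimal sketch of reproducing that check and the troubleshooting steps the kubeadm output itself recommends, run inside the minikube VM (the profile name is a placeholder; the commands are the ones quoted in the log):

	minikube ssh -p <profile>
	curl -sSL http://localhost:10248/healthz          # the endpoint kubeadm polls; "connection refused" = kubelet not running
	sudo systemctl status kubelet                     # is the unit active, and why did it stop?
	sudo journalctl -xeu kubelet | tail -n 100        # kubelet's own error messages
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause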
	I0815 18:45:08.967716   68713 kubeadm.go:394] duration metric: took 7m56.388213745s to StartCluster
	I0815 18:45:08.967768   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:45:08.967834   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:45:09.013913   68713 cri.go:89] found id: ""
	I0815 18:45:09.013943   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.013954   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:45:09.013961   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:45:09.014030   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:45:09.051370   68713 cri.go:89] found id: ""
	I0815 18:45:09.051395   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.051403   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:45:09.051409   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:45:09.051477   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:45:09.086615   68713 cri.go:89] found id: ""
	I0815 18:45:09.086646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.086653   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:45:09.086659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:45:09.086708   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:45:09.122335   68713 cri.go:89] found id: ""
	I0815 18:45:09.122370   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.122381   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:45:09.122389   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:45:09.122453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:45:09.163207   68713 cri.go:89] found id: ""
	I0815 18:45:09.163232   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.163241   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:45:09.163247   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:45:09.163308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:45:09.199396   68713 cri.go:89] found id: ""
	I0815 18:45:09.199426   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.199437   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:45:09.199444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:45:09.199504   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:45:09.235073   68713 cri.go:89] found id: ""
	I0815 18:45:09.235101   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.235112   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:45:09.235120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:45:09.235180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:45:09.271614   68713 cri.go:89] found id: ""
	I0815 18:45:09.271646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.271659   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:45:09.271671   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:45:09.271686   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:45:09.372192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:45:09.372214   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:45:09.372231   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:45:09.496743   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:45:09.496780   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:45:09.540434   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:45:09.540471   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:45:09.595546   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:45:09.595584   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
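The four "Gathering logs" steps above are the diagnostics minikube collects automatically once the control plane fails to start. A hedged sketch of running the same collection by hand on the node (commands copied from the Run: lines above; anything beyond them is an assumption):

	sudo journalctl -u crio -n 400                                               # CRI-O runtime log
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a                # container status
	sudo journalctl -u kubelet -n 400                                            # kubelet log
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400      # kernel warnings/errors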
	W0815 18:45:09.609831   68713 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:45:09.609885   68713 out.go:270] * 
	W0815 18:45:09.609942   68713 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.609956   68713 out.go:270] * 
	W0815 18:45:09.610794   68713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:45:09.614213   68713 out.go:201] 
	W0815 18:45:09.615379   68713 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.615420   68713 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:45:09.615437   68713 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:45:09.616840   68713 out.go:201] 
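The exit reason above (K8S_KUBELET_NOT_RUNNING) comes with two concrete follow-ups in the log: inspect 'journalctl -xeu kubelet' and retry with the kubelet's cgroup driver pinned to systemd. A minimal sketch of that retry, assuming the same cri-o runtime and Kubernetes version as the failed run (the profile name is a placeholder):

	minikube start -p <profile> \
	  --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd
	# if it still fails, collect logs for a bug report as the box above suggests:
	minikube logs --file=logs.txt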
	
	
	==> CRI-O <==
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.100030659Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=56a985b8-1b5c-4322-a6da-e4afbbe9748f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.100290844Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747099078648476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 593f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f67889c939c1c28f9d604fde6516fabfbdf1713a402fc1bb229d11db5af0a05,PodSandboxId:b2fbf56a4f219ec0cb5f6103ff5aa805c8ece23e530c5591cbcc84a7042479c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747078754762854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38120fa0-c110-4003-a0a2-ecf726f1a3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c,PodSandboxId:15895665850f1469a24f3cac28ff257e2468adfd83ef2d438062547e1710e688,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747075914637405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kpq9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9592b56d-a037-4212-86f2-29e5824626fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747068262730483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
93f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791,PodSandboxId:0758c39c1907e8b7b52e57c51af54b47b5e46ed50dd5b2498463c979fceb45de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747068312810184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwb9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f286e9d-3035-4280-adff-d3ca5653c2
f8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27,PodSandboxId:1e552e5c3ce5d5d07939e63eaabe226524c2b54591bfc590eeae2d88cf4a2735,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747063560326166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92824f436589abf4cecd2cad2981043b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de,PodSandboxId:ae7eb74e81608640ff66458131e72a19d7976c4944b3a7a1c2b6f85a2f30277f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747063655462864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef872432fb4e315dd3151104265c9da6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f,PodSandboxId:de630a3983fd53e5b1a4ec27b5fa23dcbb61f069dbea8afcf8c1d8ef3ad6bb3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747063480456290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567329cb6993a54a0826ef2ad1abb690,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f,PodSandboxId:d20e0818100ae91fbf69e7d9cf3a3b7c8896b9df3a05deed98f248ddae1876e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747063425442460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202e22fcf9be3034b0f682399dce7ac3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=56a985b8-1b5c-4322-a6da-e4afbbe9748f name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.112442364Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=2120efc3-a97f-4641-ba47-2bac71faa0b2 name=/runtime.v1.RuntimeService/Status
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.112502402Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=2120efc3-a97f-4641-ba47-2bac71faa0b2 name=/runtime.v1.RuntimeService/Status
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.142652179Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8dcc657-8897-4581-90c8-631a619f93b9 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.142738831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8dcc657-8897-4581-90c8-631a619f93b9 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.144200350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b494ab8-4c09-4672-8f3a-ba22ada4de48 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.144524105Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747873144503214,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b494ab8-4c09-4672-8f3a-ba22ada4de48 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.145096459Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b9a6d22-19cf-4304-8d42-68bc0f0b24ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.145151704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b9a6d22-19cf-4304-8d42-68bc0f0b24ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.145363950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747099078648476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 593f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f67889c939c1c28f9d604fde6516fabfbdf1713a402fc1bb229d11db5af0a05,PodSandboxId:b2fbf56a4f219ec0cb5f6103ff5aa805c8ece23e530c5591cbcc84a7042479c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747078754762854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38120fa0-c110-4003-a0a2-ecf726f1a3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c,PodSandboxId:15895665850f1469a24f3cac28ff257e2468adfd83ef2d438062547e1710e688,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747075914637405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kpq9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9592b56d-a037-4212-86f2-29e5824626fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747068262730483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
93f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791,PodSandboxId:0758c39c1907e8b7b52e57c51af54b47b5e46ed50dd5b2498463c979fceb45de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747068312810184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwb9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f286e9d-3035-4280-adff-d3ca5653c2
f8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27,PodSandboxId:1e552e5c3ce5d5d07939e63eaabe226524c2b54591bfc590eeae2d88cf4a2735,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747063560326166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92824f436589abf4cecd2cad2981043b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de,PodSandboxId:ae7eb74e81608640ff66458131e72a19d7976c4944b3a7a1c2b6f85a2f30277f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747063655462864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef872432fb4e315dd3151104265c9da6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f,PodSandboxId:de630a3983fd53e5b1a4ec27b5fa23dcbb61f069dbea8afcf8c1d8ef3ad6bb3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747063480456290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567329cb6993a54a0826ef2ad1abb690,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f,PodSandboxId:d20e0818100ae91fbf69e7d9cf3a3b7c8896b9df3a05deed98f248ddae1876e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747063425442460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202e22fcf9be3034b0f682399dce7ac3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b9a6d22-19cf-4304-8d42-68bc0f0b24ea name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.184788372Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cca64f28-af83-41d0-93b1-4da58849559e name=/runtime.v1.RuntimeService/Version
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.184857395Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cca64f28-af83-41d0-93b1-4da58849559e name=/runtime.v1.RuntimeService/Version
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.186278618Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=786b0fca-caf4-4573-bb3f-1a489ba9e8a1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.186660073Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747873186635581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=786b0fca-caf4-4573-bb3f-1a489ba9e8a1 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.187161723Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db5fc310-3271-4b51-b514-bb4db8d22c33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.187213193Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db5fc310-3271-4b51-b514-bb4db8d22c33 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.187391879Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747099078648476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 593f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f67889c939c1c28f9d604fde6516fabfbdf1713a402fc1bb229d11db5af0a05,PodSandboxId:b2fbf56a4f219ec0cb5f6103ff5aa805c8ece23e530c5591cbcc84a7042479c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747078754762854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38120fa0-c110-4003-a0a2-ecf726f1a3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c,PodSandboxId:15895665850f1469a24f3cac28ff257e2468adfd83ef2d438062547e1710e688,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747075914637405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kpq9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9592b56d-a037-4212-86f2-29e5824626fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747068262730483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
93f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791,PodSandboxId:0758c39c1907e8b7b52e57c51af54b47b5e46ed50dd5b2498463c979fceb45de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747068312810184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwb9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f286e9d-3035-4280-adff-d3ca5653c2
f8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27,PodSandboxId:1e552e5c3ce5d5d07939e63eaabe226524c2b54591bfc590eeae2d88cf4a2735,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747063560326166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92824f436589abf4cecd2cad2981043b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de,PodSandboxId:ae7eb74e81608640ff66458131e72a19d7976c4944b3a7a1c2b6f85a2f30277f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747063655462864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef872432fb4e315dd3151104265c9da6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f,PodSandboxId:de630a3983fd53e5b1a4ec27b5fa23dcbb61f069dbea8afcf8c1d8ef3ad6bb3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747063480456290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567329cb6993a54a0826ef2ad1abb690,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f,PodSandboxId:d20e0818100ae91fbf69e7d9cf3a3b7c8896b9df3a05deed98f248ddae1876e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747063425442460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202e22fcf9be3034b0f682399dce7ac3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db5fc310-3271-4b51-b514-bb4db8d22c33 name=/runtime.v1.RuntimeService/ListContainers
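The CRI-O debug entries in this section are the gRPC calls on the CRI API (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) issued by a CRI client polling the runtime; the ListContainersResponse dumped above carries the same data that crictl renders. A hedged sketch of issuing the same query by hand (socket path copied from earlier in the log; the -o flag is an assumption about crictl's output options):

	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a              # table view of the container list above
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a -o json      # raw fields: labels, annotations, image refs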
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.222954450Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=94cdc0f4-1523-4869-ba67-5a1a3cb9348f name=/runtime.v1.RuntimeService/Version
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.223028221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=94cdc0f4-1523-4869-ba67-5a1a3cb9348f name=/runtime.v1.RuntimeService/Version
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.224515557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=661f975b-fc44-4418-b6bb-c6429f766318 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.224930331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747873224905073,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=661f975b-fc44-4418-b6bb-c6429f766318 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.225398956Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=74b65f13-c8f0-4e4e-ae9d-9ca237bfc502 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.225456633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=74b65f13-c8f0-4e4e-ae9d-9ca237bfc502 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:51:13 no-preload-599042 crio[726]: time="2024-08-15 18:51:13.225713118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747099078648476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 593f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f67889c939c1c28f9d604fde6516fabfbdf1713a402fc1bb229d11db5af0a05,PodSandboxId:b2fbf56a4f219ec0cb5f6103ff5aa805c8ece23e530c5591cbcc84a7042479c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747078754762854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38120fa0-c110-4003-a0a2-ecf726f1a3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c,PodSandboxId:15895665850f1469a24f3cac28ff257e2468adfd83ef2d438062547e1710e688,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747075914637405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kpq9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9592b56d-a037-4212-86f2-29e5824626fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747068262730483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
93f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791,PodSandboxId:0758c39c1907e8b7b52e57c51af54b47b5e46ed50dd5b2498463c979fceb45de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747068312810184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwb9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f286e9d-3035-4280-adff-d3ca5653c2
f8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27,PodSandboxId:1e552e5c3ce5d5d07939e63eaabe226524c2b54591bfc590eeae2d88cf4a2735,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747063560326166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92824f436589abf4cecd2cad2981043b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de,PodSandboxId:ae7eb74e81608640ff66458131e72a19d7976c4944b3a7a1c2b6f85a2f30277f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747063655462864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef872432fb4e315dd3151104265c9da6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f,PodSandboxId:de630a3983fd53e5b1a4ec27b5fa23dcbb61f069dbea8afcf8c1d8ef3ad6bb3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747063480456290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567329cb6993a54a0826ef2ad1abb690,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f,PodSandboxId:d20e0818100ae91fbf69e7d9cf3a3b7c8896b9df3a05deed98f248ddae1876e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747063425442460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202e22fcf9be3034b0f682399dce7ac3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=74b65f13-c8f0-4e4e-ae9d-9ca237bfc502 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	000b1f65df4e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   d42babab0be95       storage-provisioner
	8f67889c939c1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   b2fbf56a4f219       busybox
	ba61cbc99841c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   15895665850f1       coredns-6f6b679f8f-kpq9m
	66df56dcd33cf       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      13 minutes ago      Running             kube-proxy                1                   0758c39c1907e       kube-proxy-bwb9h
	1a53d726afaa5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   d42babab0be95       storage-provisioner
	f93d6e3cca40c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   ae7eb74e81608       etcd-no-preload-599042
	74f2072bea476       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      13 minutes ago      Running             kube-scheduler            1                   1e552e5c3ce5d       kube-scheduler-no-preload-599042
	831a14c2b0bb2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      13 minutes ago      Running             kube-apiserver            1                   de630a3983fd5       kube-apiserver-no-preload-599042
	c4afb41627fd6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      13 minutes ago      Running             kube-controller-manager   1                   d20e0818100ae       kube-controller-manager-no-preload-599042
	
	
	==> coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53230 - 59055 "HINFO IN 998974764882245978.2108705576189184450. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015086063s
	
	
	==> describe nodes <==
	Name:               no-preload-599042
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-599042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=no-preload-599042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T18_28_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:28:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-599042
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:51:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:48:29 +0000   Thu, 15 Aug 2024 18:28:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:48:29 +0000   Thu, 15 Aug 2024 18:28:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:48:29 +0000   Thu, 15 Aug 2024 18:28:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:48:29 +0000   Thu, 15 Aug 2024 18:37:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.14
	  Hostname:    no-preload-599042
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e198536b9a0e45afb82f8ee8d9f6ab80
	  System UUID:                e198536b-9a0e-45af-b82f-8ee8d9f6ab80
	  Boot ID:                    878ff641-9d9f-4cb1-ae56-44926fece655
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-6f6b679f8f-kpq9m                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-no-preload-599042                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-no-preload-599042             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-no-preload-599042    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-bwb9h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-no-preload-599042             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-6867b74b74-djv7r              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node no-preload-599042 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node no-preload-599042 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node no-preload-599042 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node no-preload-599042 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node no-preload-599042 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node no-preload-599042 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node no-preload-599042 status is now: NodeReady
	  Normal  RegisteredNode           22m                node-controller  Node no-preload-599042 event: Registered Node no-preload-599042 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-599042 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-599042 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-599042 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-599042 event: Registered Node no-preload-599042 in Controller
	
	
	==> dmesg <==
	[Aug15 18:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058135] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043981] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.167378] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.640733] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591032] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.799338] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.060697] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055583] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.185665] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.120148] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.272466] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[ +16.290318] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.054796] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.103469] systemd-fstab-generator[1430]: Ignoring "noauto" option for root device
	[  +4.435828] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.595375] systemd-fstab-generator[2059]: Ignoring "noauto" option for root device
	[  +3.290422] kauditd_printk_skb: 61 callbacks suppressed
	[Aug15 18:38] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] <==
	{"level":"info","ts":"2024-08-15T18:37:44.074947Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:37:44.077325Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-15T18:37:44.078277Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"23d55004837fefad","initial-advertise-peer-urls":["https://192.168.72.14:2380"],"listen-peer-urls":["https://192.168.72.14:2380"],"advertise-client-urls":["https://192.168.72.14:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.14:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-15T18:37:44.078568Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-15T18:37:44.077728Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.14:2380"}
	{"level":"info","ts":"2024-08-15T18:37:44.078763Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.14:2380"}
	{"level":"info","ts":"2024-08-15T18:37:45.811120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23d55004837fefad is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-15T18:37:45.811223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23d55004837fefad became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-15T18:37:45.811282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23d55004837fefad received MsgPreVoteResp from 23d55004837fefad at term 2"}
	{"level":"info","ts":"2024-08-15T18:37:45.811315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23d55004837fefad became candidate at term 3"}
	{"level":"info","ts":"2024-08-15T18:37:45.811339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23d55004837fefad received MsgVoteResp from 23d55004837fefad at term 3"}
	{"level":"info","ts":"2024-08-15T18:37:45.811374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"23d55004837fefad became leader at term 3"}
	{"level":"info","ts":"2024-08-15T18:37:45.811406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 23d55004837fefad elected leader 23d55004837fefad at term 3"}
	{"level":"info","ts":"2024-08-15T18:37:45.825814Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"23d55004837fefad","local-member-attributes":"{Name:no-preload-599042 ClientURLs:[https://192.168.72.14:2379]}","request-path":"/0/members/23d55004837fefad/attributes","cluster-id":"9dd9b7dee622826f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T18:37:45.826134Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:37:45.826195Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:37:45.826634Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T18:37:45.826684Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T18:37:45.827274Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:37:45.827292Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:37:45.828189Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T18:37:45.828196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.14:2379"}
	{"level":"info","ts":"2024-08-15T18:47:45.853959Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":839}
	{"level":"info","ts":"2024-08-15T18:47:45.868835Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":839,"took":"14.267097ms","hash":4177376985,"current-db-size-bytes":2752512,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2752512,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-08-15T18:47:45.868913Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4177376985,"revision":839,"compact-revision":-1}
	
	
	==> kernel <==
	 18:51:13 up 14 min,  0 users,  load average: 0.31, 0.14, 0.10
	Linux no-preload-599042 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] <==
	E0815 18:47:48.229784       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0815 18:47:48.230014       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:47:48.230907       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:47:48.232055       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:48:48.231758       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:48:48.232100       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 18:48:48.232195       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:48:48.232297       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:48:48.233172       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:48:48.234341       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:50:48.233368       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:50:48.233460       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0815 18:50:48.234563       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:50:48.234567       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:50:48.234771       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:50:48.235987       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] <==
	E0815 18:45:50.838097       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:45:51.307479       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:46:20.845757       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:46:21.315248       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:46:50.855745       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:46:51.322909       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:47:20.861835       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:47:21.332312       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:47:50.868682       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:47:51.340006       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:48:20.875339       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:48:21.348351       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:48:29.365431       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-599042"
	E0815 18:48:50.885321       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:48:51.357413       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:48:58.869431       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="233.014µs"
	I0815 18:49:12.865793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="156.908µs"
	E0815 18:49:20.891036       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:49:21.365356       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:49:50.897003       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:49:51.373227       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:50:20.904660       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:50:21.383869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:50:50.911793       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:50:51.391146       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:37:48.527552       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:37:48.536176       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.14"]
	E0815 18:37:48.536361       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:37:48.572735       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:37:48.572782       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:37:48.572807       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:37:48.575705       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:37:48.576062       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:37:48.576088       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:37:48.577783       1 config.go:197] "Starting service config controller"
	I0815 18:37:48.577823       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:37:48.577844       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:37:48.577848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:37:48.579381       1 config.go:326] "Starting node config controller"
	I0815 18:37:48.579410       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:37:48.678624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:37:48.678679       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:37:48.679995       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] <==
	I0815 18:37:44.705906       1 serving.go:386] Generated self-signed cert in-memory
	W0815 18:37:47.165664       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 18:37:47.165848       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 18:37:47.165934       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 18:37:47.165959       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 18:37:47.253180       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 18:37:47.253329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:37:47.256561       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 18:37:47.256766       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 18:37:47.256835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 18:37:47.257161       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 18:37:47.357681       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:50:04 no-preload-599042 kubelet[1437]: E0815 18:50:04.851360    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:50:13 no-preload-599042 kubelet[1437]: E0815 18:50:13.025813    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747813025378329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:13 no-preload-599042 kubelet[1437]: E0815 18:50:13.026127    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747813025378329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:15 no-preload-599042 kubelet[1437]: E0815 18:50:15.851302    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:50:23 no-preload-599042 kubelet[1437]: E0815 18:50:23.028616    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747823027908328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:23 no-preload-599042 kubelet[1437]: E0815 18:50:23.028708    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747823027908328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:30 no-preload-599042 kubelet[1437]: E0815 18:50:30.854879    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:50:33 no-preload-599042 kubelet[1437]: E0815 18:50:33.030775    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747833030432115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:33 no-preload-599042 kubelet[1437]: E0815 18:50:33.030827    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747833030432115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:41 no-preload-599042 kubelet[1437]: E0815 18:50:41.852105    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:50:42 no-preload-599042 kubelet[1437]: E0815 18:50:42.870213    1437 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:50:42 no-preload-599042 kubelet[1437]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:50:42 no-preload-599042 kubelet[1437]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:50:42 no-preload-599042 kubelet[1437]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:50:42 no-preload-599042 kubelet[1437]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:50:43 no-preload-599042 kubelet[1437]: E0815 18:50:43.033122    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747843032689232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:43 no-preload-599042 kubelet[1437]: E0815 18:50:43.033170    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747843032689232,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:53 no-preload-599042 kubelet[1437]: E0815 18:50:53.035022    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747853034712106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:53 no-preload-599042 kubelet[1437]: E0815 18:50:53.035126    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747853034712106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:50:56 no-preload-599042 kubelet[1437]: E0815 18:50:56.851795    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:51:03 no-preload-599042 kubelet[1437]: E0815 18:51:03.037072    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747863036645245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:51:03 no-preload-599042 kubelet[1437]: E0815 18:51:03.037371    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747863036645245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:51:09 no-preload-599042 kubelet[1437]: E0815 18:51:09.850746    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:51:13 no-preload-599042 kubelet[1437]: E0815 18:51:13.039539    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747873039308641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:51:13 no-preload-599042 kubelet[1437]: E0815 18:51:13.039696    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723747873039308641,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] <==
	I0815 18:38:19.161510       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 18:38:19.173233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 18:38:19.173340       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 18:38:19.181838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 18:38:19.181993       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-599042_17dee5fe-21a1-403e-b470-19ab99791054!
	I0815 18:38:19.185118       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"878577f0-7b6e-4dac-8c6f-ccfc640f6556", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-599042_17dee5fe-21a1-403e-b470-19ab99791054 became leader
	I0815 18:38:19.282471       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-599042_17dee5fe-21a1-403e-b470-19ab99791054!
	
	
	==> storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] <==
	I0815 18:37:48.460152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 18:38:18.462790       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-599042 -n no-preload-599042
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-599042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-djv7r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-599042 describe pod metrics-server-6867b74b74-djv7r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-599042 describe pod metrics-server-6867b74b74-djv7r: exit status 1 (63.444427ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-djv7r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-599042 describe pod metrics-server-6867b74b74-djv7r: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
E0815 18:47:47.733748   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
E0815 18:49:52.218771   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
[previous line repeated 139 more times: dial tcp 192.168.39.89:8443: connect: connection refused]
E0815 18:52:47.733444   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 2 (221.160422ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-278865" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
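For reference, the condition this test polls for can be probed by hand; a minimal sketch, assuming the profile name old-k8s-version-278865 also names the kubeconfig context (the usual minikube convention), with output varying per run:
	# hypothetical manual re-check of the failed dashboard wait (names taken from the log above)
	kubectl --context old-k8s-version-278865 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# apiserver state that the test inspects next
	out/minikube-linux-amd64 status -p old-k8s-version-278865 --format='{{.APIServer}}'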
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 2 (221.582595ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
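The two probes above each read a single status field; as a rough sketch (template field names as printed by minikube status; {{.Kubelet}} is assumed here, only {{.Host}} and {{.APIServer}} appear in the log), several components can be read in one call against the same profile:
	out/minikube-linux-amd64 status -p old-k8s-version-278865 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'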
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-278865 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-278865 logs -n 25: (1.610985145s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-498665                              | stopped-upgrade-498665       | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-698209 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | disable-driver-mounts-698209                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:29 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-599042             | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-555028            | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-423062  | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-278865        | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-599042                  | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC | 15 Aug 24 18:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-555028                 | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-423062       | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-278865             | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:32:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:32:52.788403   68713 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:32:52.788704   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788715   68713 out.go:358] Setting ErrFile to fd 2...
	I0815 18:32:52.788719   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788916   68713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:32:52.789431   68713 out.go:352] Setting JSON to false
	I0815 18:32:52.790297   68713 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8119,"bootTime":1723738654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:32:52.790355   68713 start.go:139] virtualization: kvm guest
	I0815 18:32:52.792478   68713 out.go:177] * [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:32:52.793818   68713 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:32:52.793864   68713 notify.go:220] Checking for updates...
	I0815 18:32:52.796618   68713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:32:52.797914   68713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:32:52.799054   68713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:32:52.800337   68713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:32:52.801448   68713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:32:52.803087   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:32:52.803465   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.803521   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.819013   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0815 18:32:52.819447   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.819966   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.819985   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.820284   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.820482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.822582   68713 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 18:32:52.824024   68713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:32:52.824380   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.824425   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.839486   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0815 18:32:52.839905   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.840345   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.840367   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.840730   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.840904   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.876811   68713 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:32:52.878075   68713 start.go:297] selected driver: kvm2
	I0815 18:32:52.878098   68713 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.878240   68713 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:32:52.878920   68713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.879001   68713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:32:52.894158   68713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:32:52.894895   68713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:32:52.894953   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:32:52.894969   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:32:52.895020   68713 start.go:340] cluster config:
	{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.895203   68713 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.897304   68713 out.go:177] * Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	I0815 18:32:51.348753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:32:52.898737   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:32:52.898785   68713 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:32:52.898795   68713 cache.go:56] Caching tarball of preloaded images
	I0815 18:32:52.898861   68713 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:32:52.898871   68713 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:32:52.898962   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:32:52.899159   68713 start.go:360] acquireMachinesLock for old-k8s-version-278865: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:32:57.424754   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:00.496786   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:06.576768   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:09.648759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:15.728760   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:18.800783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:24.880725   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:27.952781   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:34.032763   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:37.104737   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:43.184796   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:46.260701   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:52.336771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:55.408745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:01.488742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:04.560759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:10.640771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:13.712753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:19.792795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:22.864720   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:28.944769   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:32.016745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:38.096783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:41.168739   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:47.248802   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:50.320778   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:56.400717   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:59.472780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:05.552762   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:08.624707   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:14.704753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:17.776748   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:23.856782   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:26.932742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:33.008795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:36.080807   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:42.160767   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:45.232800   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:51.312780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:54.384719   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:00.464740   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:03.536736   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:06.540805   68248 start.go:364] duration metric: took 4m1.610543673s to acquireMachinesLock for "embed-certs-555028"
	I0815 18:36:06.540869   68248 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:06.540881   68248 fix.go:54] fixHost starting: 
	I0815 18:36:06.541241   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:06.541272   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:06.556680   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0815 18:36:06.557105   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:06.557518   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:36:06.557540   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:06.557831   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:06.558059   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:06.558202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:36:06.559702   68248 fix.go:112] recreateIfNeeded on embed-certs-555028: state=Stopped err=<nil>
	I0815 18:36:06.559724   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	W0815 18:36:06.559877   68248 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:06.561410   68248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-555028" ...
	I0815 18:36:06.538474   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:06.538508   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.538805   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:36:06.538831   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.539016   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:36:06.540664   67936 machine.go:96] duration metric: took 4m37.431349663s to provisionDockerMachine
	I0815 18:36:06.540702   67936 fix.go:56] duration metric: took 4m37.452150687s for fixHost
	I0815 18:36:06.540709   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 4m37.452172562s
	W0815 18:36:06.540732   67936 start.go:714] error starting host: provision: host is not running
	W0815 18:36:06.540801   67936 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0815 18:36:06.540809   67936 start.go:729] Will try again in 5 seconds ...
	I0815 18:36:06.562384   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Start
	I0815 18:36:06.562537   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring networks are active...
	I0815 18:36:06.563252   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network default is active
	I0815 18:36:06.563554   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network mk-embed-certs-555028 is active
	I0815 18:36:06.563908   68248 main.go:141] libmachine: (embed-certs-555028) Getting domain xml...
	I0815 18:36:06.564614   68248 main.go:141] libmachine: (embed-certs-555028) Creating domain...
	I0815 18:36:07.763793   68248 main.go:141] libmachine: (embed-certs-555028) Waiting to get IP...
	I0815 18:36:07.764733   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.765099   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.765200   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.765085   69393 retry.go:31] will retry after 206.840107ms: waiting for machine to come up
	I0815 18:36:07.973596   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.974069   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.974093   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.974019   69393 retry.go:31] will retry after 319.002956ms: waiting for machine to come up
	I0815 18:36:08.294670   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.295125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.295154   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.295073   69393 retry.go:31] will retry after 425.99373ms: waiting for machine to come up
	I0815 18:36:08.722549   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.722954   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.722985   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.722903   69393 retry.go:31] will retry after 428.077891ms: waiting for machine to come up
	I0815 18:36:09.152674   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.153155   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.153187   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.153108   69393 retry.go:31] will retry after 476.041155ms: waiting for machine to come up
	I0815 18:36:09.630963   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.631456   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.631485   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.631395   69393 retry.go:31] will retry after 751.179582ms: waiting for machine to come up
	I0815 18:36:11.542364   67936 start.go:360] acquireMachinesLock for no-preload-599042: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:36:10.384466   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:10.384888   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:10.384916   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:10.384842   69393 retry.go:31] will retry after 1.028202731s: waiting for machine to come up
	I0815 18:36:11.414905   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:11.415343   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:11.415373   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:11.415283   69393 retry.go:31] will retry after 1.129105535s: waiting for machine to come up
	I0815 18:36:12.545941   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:12.546365   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:12.546387   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:12.546320   69393 retry.go:31] will retry after 1.734323812s: waiting for machine to come up
	I0815 18:36:14.283247   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:14.283622   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:14.283653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:14.283569   69393 retry.go:31] will retry after 1.657173562s: waiting for machine to come up
	I0815 18:36:15.943329   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:15.943730   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:15.943760   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:15.943669   69393 retry.go:31] will retry after 2.349664822s: waiting for machine to come up
	I0815 18:36:18.295797   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:18.296330   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:18.296363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:18.296264   69393 retry.go:31] will retry after 2.889119284s: waiting for machine to come up
	I0815 18:36:21.186597   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:21.186983   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:21.187004   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:21.186945   69393 retry.go:31] will retry after 2.79101595s: waiting for machine to come up
	I0815 18:36:23.981271   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981732   68248 main.go:141] libmachine: (embed-certs-555028) Found IP for machine: 192.168.50.234
	I0815 18:36:23.981761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has current primary IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981770   68248 main.go:141] libmachine: (embed-certs-555028) Reserving static IP address...
	I0815 18:36:23.982166   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.982189   68248 main.go:141] libmachine: (embed-certs-555028) DBG | skip adding static IP to network mk-embed-certs-555028 - found existing host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"}
	I0815 18:36:23.982200   68248 main.go:141] libmachine: (embed-certs-555028) Reserved static IP address: 192.168.50.234
	I0815 18:36:23.982210   68248 main.go:141] libmachine: (embed-certs-555028) Waiting for SSH to be available...
	I0815 18:36:23.982220   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Getting to WaitForSSH function...
	I0815 18:36:23.984253   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984578   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.984601   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984696   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH client type: external
	I0815 18:36:23.984720   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa (-rw-------)
	I0815 18:36:23.984752   68248 main.go:141] libmachine: (embed-certs-555028) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:23.984763   68248 main.go:141] libmachine: (embed-certs-555028) DBG | About to run SSH command:
	I0815 18:36:23.984772   68248 main.go:141] libmachine: (embed-certs-555028) DBG | exit 0
	I0815 18:36:24.104618   68248 main.go:141] libmachine: (embed-certs-555028) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:24.105023   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetConfigRaw
	I0815 18:36:24.105694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.108191   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108532   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.108568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108844   68248 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/config.json ...
	I0815 18:36:24.109037   68248 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:24.109055   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.109249   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.111363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111680   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.111725   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111821   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.111989   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112141   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112277   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.112454   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.112662   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.112673   68248 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:24.208951   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:24.208986   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209255   68248 buildroot.go:166] provisioning hostname "embed-certs-555028"
	I0815 18:36:24.209285   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209514   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.212393   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.212850   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.212878   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.213010   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.213198   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213340   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213466   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.213663   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.213821   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.213832   68248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-555028 && echo "embed-certs-555028" | sudo tee /etc/hostname
	I0815 18:36:24.327157   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-555028
	
	I0815 18:36:24.327191   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.330193   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330577   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.330607   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330824   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.331029   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331174   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331325   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.331508   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.331713   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.331732   68248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-555028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-555028/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-555028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:24.437909   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:24.437938   68248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:24.437977   68248 buildroot.go:174] setting up certificates
	I0815 18:36:24.437987   68248 provision.go:84] configureAuth start
	I0815 18:36:24.437996   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.438264   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.440637   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.440961   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.440993   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.441089   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.443071   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443415   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.443448   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443562   68248 provision.go:143] copyHostCerts
	I0815 18:36:24.443622   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:24.443643   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:24.443726   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:24.443843   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:24.443855   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:24.443893   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:24.443968   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:24.443977   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:24.444007   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:24.444074   68248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.embed-certs-555028 san=[127.0.0.1 192.168.50.234 embed-certs-555028 localhost minikube]
	I0815 18:36:24.507119   68248 provision.go:177] copyRemoteCerts
	I0815 18:36:24.507177   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:24.507202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.509835   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510230   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.510260   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510403   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.510606   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.510735   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.510842   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:24.590623   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:24.615635   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:36:24.643400   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:36:24.670394   68248 provision.go:87] duration metric: took 232.396705ms to configureAuth
	I0815 18:36:24.670415   68248 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:24.670609   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:24.670694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.673303   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673685   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.673721   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673863   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.674050   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674222   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674354   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.674513   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.674673   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.674688   68248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:25.149223   68429 start.go:364] duration metric: took 3m59.233021018s to acquireMachinesLock for "default-k8s-diff-port-423062"
	I0815 18:36:25.149295   68429 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:25.149306   68429 fix.go:54] fixHost starting: 
	I0815 18:36:25.149757   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:25.149799   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:25.166940   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I0815 18:36:25.167342   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:25.167882   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:25.167910   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:25.168179   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:25.168383   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:25.168553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:25.170072   68429 fix.go:112] recreateIfNeeded on default-k8s-diff-port-423062: state=Stopped err=<nil>
	I0815 18:36:25.170106   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	W0815 18:36:25.170263   68429 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:25.172091   68429 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-423062" ...
	I0815 18:36:25.173641   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Start
	I0815 18:36:25.173831   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring networks are active...
	I0815 18:36:25.174594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network default is active
	I0815 18:36:25.174981   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network mk-default-k8s-diff-port-423062 is active
	I0815 18:36:25.175410   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Getting domain xml...
	I0815 18:36:25.176275   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Creating domain...
	I0815 18:36:24.928110   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:24.928140   68248 machine.go:96] duration metric: took 819.089931ms to provisionDockerMachine
	I0815 18:36:24.928156   68248 start.go:293] postStartSetup for "embed-certs-555028" (driver="kvm2")
	I0815 18:36:24.928170   68248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:24.928190   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.928513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:24.928542   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.931301   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931756   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.931799   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931846   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.932028   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.932311   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.932477   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.011373   68248 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:25.015677   68248 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:25.015707   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:25.015798   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:25.015900   68248 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:25.016014   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:25.025465   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:25.049662   68248 start.go:296] duration metric: took 121.491742ms for postStartSetup
	I0815 18:36:25.049704   68248 fix.go:56] duration metric: took 18.508823511s for fixHost
	I0815 18:36:25.049728   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.052184   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052538   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.052583   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052718   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.052904   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053099   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.053438   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:25.053604   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:25.053614   68248 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:25.149075   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746985.122186042
	
	I0815 18:36:25.149095   68248 fix.go:216] guest clock: 1723746985.122186042
	I0815 18:36:25.149103   68248 fix.go:229] Guest: 2024-08-15 18:36:25.122186042 +0000 UTC Remote: 2024-08-15 18:36:25.049708543 +0000 UTC m=+260.258232753 (delta=72.477499ms)
	I0815 18:36:25.149131   68248 fix.go:200] guest clock delta is within tolerance: 72.477499ms
	I0815 18:36:25.149135   68248 start.go:83] releasing machines lock for "embed-certs-555028", held for 18.608287436s
	I0815 18:36:25.149158   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.149408   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:25.152125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152542   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.152568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152742   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153260   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153439   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153539   68248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:25.153587   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.153639   68248 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:25.153659   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.156311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156504   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156740   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156769   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156847   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156883   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.157040   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157122   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157303   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157318   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157473   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157479   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.233725   68248 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:25.253737   68248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:25.402047   68248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:25.409250   68248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:25.409328   68248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:25.426491   68248 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:25.426514   68248 start.go:495] detecting cgroup driver to use...
	I0815 18:36:25.426580   68248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:25.445177   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:25.459432   68248 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:25.459512   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:25.473777   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:25.488144   68248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:25.627700   68248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:25.791278   68248 docker.go:233] disabling docker service ...
	I0815 18:36:25.791349   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:25.810146   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:25.825131   68248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:25.975457   68248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:26.106757   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:26.123053   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:26.142739   68248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:26.142804   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.153163   68248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:26.153217   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.163863   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.175028   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.192480   68248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:26.208933   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.219825   68248 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.245623   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.256645   68248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:26.265947   68248 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:26.266004   68248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:26.278665   68248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:26.289519   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:26.423656   68248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:26.560919   68248 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:26.560996   68248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:26.565696   68248 start.go:563] Will wait 60s for crictl version
	I0815 18:36:26.565764   68248 ssh_runner.go:195] Run: which crictl
	I0815 18:36:26.569498   68248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:26.609872   68248 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:26.609948   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.645300   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.681229   68248 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:26.682461   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:26.685495   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686011   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:26.686037   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686323   68248 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:26.690590   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:26.703512   68248 kubeadm.go:883] updating cluster {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:26.703679   68248 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:26.703748   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:26.740601   68248 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:26.740679   68248 ssh_runner.go:195] Run: which lz4
	I0815 18:36:26.744798   68248 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:26.748894   68248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:26.748921   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:28.188174   68248 crio.go:462] duration metric: took 1.443420751s to copy over tarball
	I0815 18:36:28.188254   68248 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:26.428013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting to get IP...
	I0815 18:36:26.428929   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429397   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.429391   69513 retry.go:31] will retry after 296.45967ms: waiting for machine to come up
	I0815 18:36:26.727871   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728273   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728298   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.728237   69513 retry.go:31] will retry after 258.379179ms: waiting for machine to come up
	I0815 18:36:26.988915   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.989374   69513 retry.go:31] will retry after 418.611169ms: waiting for machine to come up
	I0815 18:36:27.409905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410358   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.410327   69513 retry.go:31] will retry after 566.642237ms: waiting for machine to come up
	I0815 18:36:27.978717   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979183   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.979125   69513 retry.go:31] will retry after 740.292473ms: waiting for machine to come up
	I0815 18:36:28.720587   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.720970   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.721008   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:28.720941   69513 retry.go:31] will retry after 610.435484ms: waiting for machine to come up
	I0815 18:36:29.333342   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333696   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333731   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:29.333632   69513 retry.go:31] will retry after 1.059086771s: waiting for machine to come up
	I0815 18:36:30.394125   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394560   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394589   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:30.394519   69513 retry.go:31] will retry after 1.279753887s: waiting for machine to come up
	I0815 18:36:30.309340   68248 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.121056035s)
	I0815 18:36:30.309382   68248 crio.go:469] duration metric: took 2.121176349s to extract the tarball
	I0815 18:36:30.309394   68248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:30.346520   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:30.394771   68248 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:30.394789   68248 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:30.394799   68248 kubeadm.go:934] updating node { 192.168.50.234 8443 v1.31.0 crio true true} ...
	I0815 18:36:30.394951   68248 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-555028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:30.395033   68248 ssh_runner.go:195] Run: crio config
	I0815 18:36:30.439636   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:30.439663   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:30.439678   68248 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:30.439707   68248 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.234 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-555028 NodeName:embed-certs-555028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:30.439899   68248 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-555028"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:30.439976   68248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:30.449774   68248 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:30.449842   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:30.458892   68248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 18:36:30.475171   68248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:30.490942   68248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 18:36:30.507498   68248 ssh_runner.go:195] Run: grep 192.168.50.234	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:30.511254   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:30.522772   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:30.646060   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:30.667948   68248 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028 for IP: 192.168.50.234
	I0815 18:36:30.667974   68248 certs.go:194] generating shared ca certs ...
	I0815 18:36:30.667994   68248 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:30.668178   68248 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:30.668231   68248 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:30.668244   68248 certs.go:256] generating profile certs ...
	I0815 18:36:30.668360   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/client.key
	I0815 18:36:30.668442   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key.539203f3
	I0815 18:36:30.668524   68248 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key
	I0815 18:36:30.668686   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:30.668725   68248 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:30.668737   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:30.668774   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:30.668807   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:30.668836   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:30.668941   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:30.669810   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:30.721245   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:30.753016   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:30.782005   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:30.815008   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 18:36:30.847615   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:30.871566   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:30.894778   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:30.919167   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:30.942597   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:30.965395   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:30.988959   68248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:31.005578   68248 ssh_runner.go:195] Run: openssl version
	I0815 18:36:31.011697   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:31.022496   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027102   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027154   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.033475   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:31.044793   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:31.055793   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060642   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060692   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.066544   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:31.077637   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:31.088468   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093295   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093347   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.098908   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:31.109856   68248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:31.114519   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:31.120709   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:31.126754   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:31.132917   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:31.138739   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:31.144785   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:31.150604   68248 kubeadm.go:392] StartCluster: {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:31.150702   68248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:31.150755   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.192152   68248 cri.go:89] found id: ""
	I0815 18:36:31.192253   68248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:31.203076   68248 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:31.203099   68248 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:31.203151   68248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:31.213659   68248 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:31.215070   68248 kubeconfig.go:125] found "embed-certs-555028" server: "https://192.168.50.234:8443"
	I0815 18:36:31.218243   68248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:31.228210   68248 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.234
	I0815 18:36:31.228245   68248 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:31.228267   68248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:31.228317   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.275944   68248 cri.go:89] found id: ""
	I0815 18:36:31.276031   68248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:31.294466   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:31.307241   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:31.307276   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:31.307327   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:36:31.316654   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:31.316722   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:31.326475   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:36:31.335726   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:31.335796   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:31.345063   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.353576   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:31.353628   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.362449   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:36:31.370717   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:31.370792   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:31.379827   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:31.389001   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:31.510611   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.158537   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.647891555s)
	I0815 18:36:33.158574   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.376600   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.459742   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.545503   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:33.545595   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.046191   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.546256   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.571236   68248 api_server.go:72] duration metric: took 1.025744612s to wait for apiserver process to appear ...
	I0815 18:36:34.571275   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:34.571297   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:31.675513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676042   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:31.675960   69513 retry.go:31] will retry after 1.669099573s: waiting for machine to come up
	I0815 18:36:33.348089   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348611   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348639   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:33.348575   69513 retry.go:31] will retry after 1.613394267s: waiting for machine to come up
	I0815 18:36:34.963674   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964187   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:34.964146   69513 retry.go:31] will retry after 2.128578928s: waiting for machine to come up
	I0815 18:36:37.262138   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.262168   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.262184   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.310539   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.310569   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.571713   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.590002   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:37.590062   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.071526   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.076179   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:38.076212   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.571714   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.576518   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:36:38.582358   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:38.582381   68248 api_server.go:131] duration metric: took 4.011097638s to wait for apiserver health ...
	I0815 18:36:38.582393   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:38.582401   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:38.584203   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:38.585513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:38.604350   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:38.645538   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:38.653445   68248 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:38.653476   68248 system_pods.go:61] "coredns-6f6b679f8f-sjx7c" [93a037b9-1e7c-471a-b62d-d7898b2b8287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:38.653486   68248 system_pods.go:61] "etcd-embed-certs-555028" [7e526b10-7acd-4d25-9847-8e11e21ba8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:38.653495   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [3f317b0f-15a1-4e7d-8ca5-3cdf70dee711] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:38.653501   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [431113cd-bce9-4ecb-8233-c5463875f1b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:38.653506   68248 system_pods.go:61] "kube-proxy-dzwt7" [a8101c7e-c010-45a3-8746-0dc20c7ef0e2] Running
	I0815 18:36:38.653513   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [84a5d051-d8c1-4097-b92c-e2f0d7a03385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:38.653520   68248 system_pods.go:61] "metrics-server-6867b74b74-wp5rn" [222160bf-6774-49a5-9f30-7582748c8a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:38.653534   68248 system_pods.go:61] "storage-provisioner" [e88c8785-2d8b-47b6-850f-e6cda74a4f5a] Running
	I0815 18:36:38.653549   68248 system_pods.go:74] duration metric: took 7.98765ms to wait for pod list to return data ...
	I0815 18:36:38.653558   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:38.656864   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:38.656893   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:38.656906   68248 node_conditions.go:105] duration metric: took 3.340245ms to run NodePressure ...
	I0815 18:36:38.656923   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:38.918518   68248 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923148   68248 kubeadm.go:739] kubelet initialised
	I0815 18:36:38.923168   68248 kubeadm.go:740] duration metric: took 4.62305ms waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923177   68248 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:38.927933   68248 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.934928   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934953   68248 pod_ready.go:82] duration metric: took 6.994953ms for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.934965   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934974   68248 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.939533   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939558   68248 pod_ready.go:82] duration metric: took 4.573835ms for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.939568   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939575   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.943567   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943590   68248 pod_ready.go:82] duration metric: took 4.004869ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.943601   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943608   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.049176   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049203   68248 pod_ready.go:82] duration metric: took 105.585473ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:39.049212   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049219   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449514   68248 pod_ready.go:93] pod "kube-proxy-dzwt7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:39.449539   68248 pod_ready.go:82] duration metric: took 400.311062ms for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449548   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:37.094139   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094640   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:37.094583   69513 retry.go:31] will retry after 2.268267509s: waiting for machine to come up
	I0815 18:36:39.365595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.365975   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.366007   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:39.365938   69513 retry.go:31] will retry after 3.286154075s: waiting for machine to come up
	I0815 18:36:44.301710   68713 start.go:364] duration metric: took 3m51.402501772s to acquireMachinesLock for "old-k8s-version-278865"
	I0815 18:36:44.301771   68713 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:44.301792   68713 fix.go:54] fixHost starting: 
	I0815 18:36:44.302227   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:44.302265   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:44.319819   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0815 18:36:44.320335   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:44.320975   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:36:44.321003   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:44.321380   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:44.321572   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:36:44.321720   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:36:44.323551   68713 fix.go:112] recreateIfNeeded on old-k8s-version-278865: state=Stopped err=<nil>
	I0815 18:36:44.323586   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	W0815 18:36:44.323748   68713 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:44.325761   68713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-278865" ...
	I0815 18:36:41.456648   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:43.456917   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:42.653801   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654221   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has current primary IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654251   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Found IP for machine: 192.168.61.7
	I0815 18:36:42.654268   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserving static IP address...
	I0815 18:36:42.654714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.654759   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | skip adding static IP to network mk-default-k8s-diff-port-423062 - found existing host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"}
	I0815 18:36:42.654778   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserved static IP address: 192.168.61.7
	I0815 18:36:42.654798   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for SSH to be available...
	I0815 18:36:42.654815   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Getting to WaitForSSH function...
	I0815 18:36:42.657618   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.657968   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.657996   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.658093   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH client type: external
	I0815 18:36:42.658115   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa (-rw-------)
	I0815 18:36:42.658200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:42.658223   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | About to run SSH command:
	I0815 18:36:42.658234   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | exit 0
	I0815 18:36:42.780714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:42.781095   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetConfigRaw
	I0815 18:36:42.781734   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:42.784384   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.784820   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.784853   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.785137   68429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/config.json ...
	I0815 18:36:42.785364   68429 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:42.785390   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:42.785599   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.788083   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.788465   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788655   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.788833   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789006   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.789301   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.789607   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.789625   68429 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:42.889002   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:42.889031   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889237   68429 buildroot.go:166] provisioning hostname "default-k8s-diff-port-423062"
	I0815 18:36:42.889260   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.892072   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892422   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.892445   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892645   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.892846   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.892995   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.893148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.893286   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.893490   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.893505   68429 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-423062 && echo "default-k8s-diff-port-423062" | sudo tee /etc/hostname
	I0815 18:36:43.008310   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-423062
	
	I0815 18:36:43.008347   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.011091   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011446   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.011472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011653   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.011864   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012027   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012159   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.012321   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.012518   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.012537   68429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-423062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-423062/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-423062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:43.121095   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:43.121123   68429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:43.121157   68429 buildroot.go:174] setting up certificates
	I0815 18:36:43.121172   68429 provision.go:84] configureAuth start
	I0815 18:36:43.121186   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:43.121510   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:43.123863   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124178   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.124200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124312   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.126385   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126633   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.126667   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126784   68429 provision.go:143] copyHostCerts
	I0815 18:36:43.126861   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:43.126884   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:43.126944   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:43.127052   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:43.127062   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:43.127090   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:43.127177   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:43.127187   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:43.127215   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:43.127286   68429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-423062 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-423062 localhost minikube]
	I0815 18:36:43.627396   68429 provision.go:177] copyRemoteCerts
	I0815 18:36:43.627460   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:43.627485   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.630025   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630311   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.630340   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630479   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.630670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.630850   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.630976   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:43.712571   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:43.738904   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 18:36:43.764328   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:36:43.787211   68429 provision.go:87] duration metric: took 666.026026ms to configureAuth
	I0815 18:36:43.787241   68429 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:43.787467   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:43.787567   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.789803   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790210   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.790232   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790432   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.790604   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790729   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.791021   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.791169   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.791187   68429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:44.067277   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:44.067307   68429 machine.go:96] duration metric: took 1.281926749s to provisionDockerMachine
	I0815 18:36:44.067322   68429 start.go:293] postStartSetup for "default-k8s-diff-port-423062" (driver="kvm2")
	I0815 18:36:44.067335   68429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:44.067360   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.067711   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:44.067749   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.070224   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070543   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.070573   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070740   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.070925   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.071079   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.071269   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.152784   68429 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:44.157264   68429 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:44.157291   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:44.157364   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:44.157461   68429 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:44.157580   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:44.168520   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:44.195223   68429 start.go:296] duration metric: took 127.886016ms for postStartSetup
	I0815 18:36:44.195268   68429 fix.go:56] duration metric: took 19.045962302s for fixHost
	I0815 18:36:44.195292   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.197711   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198065   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.198090   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198281   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.198438   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198614   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198768   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.198959   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:44.199154   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:44.199172   68429 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:44.301519   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747004.273982003
	
	I0815 18:36:44.301543   68429 fix.go:216] guest clock: 1723747004.273982003
	I0815 18:36:44.301553   68429 fix.go:229] Guest: 2024-08-15 18:36:44.273982003 +0000 UTC Remote: 2024-08-15 18:36:44.195273929 +0000 UTC m=+258.412094909 (delta=78.708074ms)
	I0815 18:36:44.301598   68429 fix.go:200] guest clock delta is within tolerance: 78.708074ms
	I0815 18:36:44.301606   68429 start.go:83] releasing machines lock for "default-k8s-diff-port-423062", held for 19.152336719s
	I0815 18:36:44.301638   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.301903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:44.305012   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305498   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.305524   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305742   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306240   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306425   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306533   68429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:44.306595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.306689   68429 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:44.306714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.309649   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.309838   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310098   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310133   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310250   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310267   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310296   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310457   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310634   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310654   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310794   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310798   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.310947   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.412125   68429 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:44.420070   68429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:44.566014   68429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:44.572209   68429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:44.572283   68429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:44.593041   68429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:44.593067   68429 start.go:495] detecting cgroup driver to use...
	I0815 18:36:44.593145   68429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:44.613683   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:44.627766   68429 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:44.627851   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:44.641172   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:44.654952   68429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:44.778684   68429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:44.965548   68429 docker.go:233] disabling docker service ...
	I0815 18:36:44.965631   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:44.983153   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:44.999109   68429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:45.131097   68429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:45.270930   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:45.287846   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:45.309345   68429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:45.309407   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.320032   68429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:45.320092   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.331647   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.342499   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.353192   68429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:45.364163   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.381124   68429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.403692   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.415032   68429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:45.424798   68429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:45.424859   68429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:45.439077   68429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:45.448554   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:45.570697   68429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:45.719575   68429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:45.719655   68429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:45.724415   68429 start.go:563] Will wait 60s for crictl version
	I0815 18:36:45.724476   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:36:45.728443   68429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:45.770935   68429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:45.771023   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.799588   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.830915   68429 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:44.327259   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .Start
	I0815 18:36:44.327431   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring networks are active...
	I0815 18:36:44.328116   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network default is active
	I0815 18:36:44.328601   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network mk-old-k8s-version-278865 is active
	I0815 18:36:44.329081   68713 main.go:141] libmachine: (old-k8s-version-278865) Getting domain xml...
	I0815 18:36:44.331888   68713 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:36:45.633882   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting to get IP...
	I0815 18:36:45.634842   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.635216   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.635286   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.635206   69670 retry.go:31] will retry after 300.377534ms: waiting for machine to come up
	I0815 18:36:45.937793   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.938290   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.938312   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.938236   69670 retry.go:31] will retry after 282.311084ms: waiting for machine to come up
	I0815 18:36:46.222856   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.223327   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.223350   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.223283   69670 retry.go:31] will retry after 354.299649ms: waiting for machine to come up
	I0815 18:36:46.578770   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.579337   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.579360   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.579241   69670 retry.go:31] will retry after 382.947645ms: waiting for machine to come up
	I0815 18:36:46.964003   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.964911   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.964943   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.964824   69670 retry.go:31] will retry after 710.757442ms: waiting for machine to come up
	I0815 18:36:47.676738   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:47.677422   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:47.677450   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:47.677360   69670 retry.go:31] will retry after 588.944709ms: waiting for machine to come up
	I0815 18:36:45.957776   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:48.456345   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:45.832411   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:45.835145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835523   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:45.835553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835762   68429 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:45.840347   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:45.854348   68429 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:45.854471   68429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:45.854527   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:45.899238   68429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:45.899320   68429 ssh_runner.go:195] Run: which lz4
	I0815 18:36:45.903367   68429 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:45.907499   68429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:45.907526   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:47.317850   68429 crio.go:462] duration metric: took 1.414524229s to copy over tarball
	I0815 18:36:47.317929   68429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:49.443172   68429 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125212316s)
	I0815 18:36:49.443206   68429 crio.go:469] duration metric: took 2.125324606s to extract the tarball
	I0815 18:36:49.443215   68429 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:49.483693   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:49.535588   68429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:49.535617   68429 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:49.535627   68429 kubeadm.go:934] updating node { 192.168.61.7 8444 v1.31.0 crio true true} ...
	I0815 18:36:49.535753   68429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-423062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:49.535843   68429 ssh_runner.go:195] Run: crio config
	I0815 18:36:49.587186   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:49.587215   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:49.587232   68429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:49.587257   68429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-423062 NodeName:default-k8s-diff-port-423062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:49.587447   68429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-423062"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:49.587520   68429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:49.598312   68429 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:49.598376   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:49.608382   68429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0815 18:36:49.624449   68429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:49.647224   68429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0815 18:36:49.664848   68429 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:49.668582   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:49.680786   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:49.804940   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:49.826104   68429 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062 for IP: 192.168.61.7
	I0815 18:36:49.826130   68429 certs.go:194] generating shared ca certs ...
	I0815 18:36:49.826147   68429 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:49.826281   68429 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:49.826322   68429 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:49.826331   68429 certs.go:256] generating profile certs ...
	I0815 18:36:49.826403   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.key
	I0815 18:36:49.826461   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key.534debab
	I0815 18:36:49.826528   68429 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key
	I0815 18:36:49.826667   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:49.826713   68429 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:49.826725   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:49.826748   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:49.826777   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:49.826810   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:49.826868   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:49.827597   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:49.855678   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:49.891292   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:49.928612   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:49.961506   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 18:36:49.993955   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:50.019275   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:50.046773   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:50.074201   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:50.101491   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:50.125378   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:50.149974   68429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:50.166393   68429 ssh_runner.go:195] Run: openssl version
	I0815 18:36:50.172182   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:50.182755   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187110   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187155   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.192956   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:50.203680   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:50.214269   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218876   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218925   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.224463   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:50.234811   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:50.245585   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250397   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250446   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.256189   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:50.267342   68429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:50.272011   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:50.278217   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:50.284300   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:50.290402   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:50.296174   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:50.301957   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:50.307807   68429 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:50.307910   68429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:50.307973   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.359833   68429 cri.go:89] found id: ""
	I0815 18:36:50.359923   68429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:50.370306   68429 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:50.370324   68429 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:50.370379   68429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:50.379585   68429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:50.380510   68429 kubeconfig.go:125] found "default-k8s-diff-port-423062" server: "https://192.168.61.7:8444"
	I0815 18:36:50.384136   68429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:50.393393   68429 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.7
	I0815 18:36:50.393428   68429 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:50.393441   68429 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:50.393494   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.428085   68429 cri.go:89] found id: ""
	I0815 18:36:50.428162   68429 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:50.444032   68429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:50.454927   68429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:50.454948   68429 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:50.455000   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 18:36:50.464733   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:50.464797   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:50.473973   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 18:36:50.482861   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:50.482910   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:50.492213   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.501173   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:50.501230   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.510299   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 18:36:50.519262   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:50.519308   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:50.528632   68429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:50.537914   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:50.655230   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:48.268221   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:48.268790   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:48.268814   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:48.268736   69670 retry.go:31] will retry after 781.489196ms: waiting for machine to come up
	I0815 18:36:49.051824   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:49.052246   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:49.052277   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:49.052182   69670 retry.go:31] will retry after 1.393037007s: waiting for machine to come up
	I0815 18:36:50.446428   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:50.446860   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:50.446892   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:50.446800   69670 retry.go:31] will retry after 1.826779004s: waiting for machine to come up
	I0815 18:36:52.275716   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:52.276208   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:52.276231   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:52.276167   69670 retry.go:31] will retry after 1.746726312s: waiting for machine to come up
	I0815 18:36:50.458388   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:52.147996   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:52.148026   68248 pod_ready.go:82] duration metric: took 12.698470185s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:52.148039   68248 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:54.153927   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:51.670903   68429 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015612511s)
	I0815 18:36:51.670943   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:51.985806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.069082   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.189200   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:52.189298   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:52.689767   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.189633   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.205099   68429 api_server.go:72] duration metric: took 1.015908263s to wait for apiserver process to appear ...
	I0815 18:36:53.205136   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:53.205162   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:53.205695   68429 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0815 18:36:53.705285   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.721139   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.721177   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:55.721193   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.750790   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.750825   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:56.205675   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.212464   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.212509   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:56.705700   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.716232   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.716277   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:57.205663   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:57.211081   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:36:57.217736   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:57.217763   68429 api_server.go:131] duration metric: took 4.012620084s to wait for apiserver health ...
	I0815 18:36:57.217772   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:57.217778   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:57.219455   68429 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:54.025067   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:54.025508   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:54.025535   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:54.025462   69670 retry.go:31] will retry after 2.693215306s: waiting for machine to come up
	I0815 18:36:56.721740   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:56.722139   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:56.722178   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:56.722070   69670 retry.go:31] will retry after 3.370623363s: waiting for machine to come up
	I0815 18:36:57.220672   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:57.241710   68429 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:57.262714   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:57.272766   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:57.272822   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:57.272836   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:57.272849   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:57.272862   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:57.272872   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:36:57.272887   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:57.272896   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:57.272902   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:36:57.272913   68429 system_pods.go:74] duration metric: took 10.175415ms to wait for pod list to return data ...
	I0815 18:36:57.272924   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:57.276880   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:57.276915   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:57.276929   68429 node_conditions.go:105] duration metric: took 3.998879ms to run NodePressure ...
	I0815 18:36:57.276951   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:57.554251   68429 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558062   68429 kubeadm.go:739] kubelet initialised
	I0815 18:36:57.558084   68429 kubeadm.go:740] duration metric: took 3.811943ms waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558091   68429 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:57.562470   68429 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.567212   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567232   68429 pod_ready.go:82] duration metric: took 4.742538ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.567240   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567245   68429 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.571217   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571237   68429 pod_ready.go:82] duration metric: took 3.984908ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.571247   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571255   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.575456   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575494   68429 pod_ready.go:82] duration metric: took 4.232215ms for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.575507   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575515   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.665876   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665902   68429 pod_ready.go:82] duration metric: took 90.37918ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.665914   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665921   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.066377   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066402   68429 pod_ready.go:82] duration metric: took 400.475025ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.066411   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066426   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.465739   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465767   68429 pod_ready.go:82] duration metric: took 399.331024ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.465779   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465787   68429 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.866772   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866798   68429 pod_ready.go:82] duration metric: took 401.001046ms for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.866809   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866817   68429 pod_ready.go:39] duration metric: took 1.308717049s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:58.866835   68429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:36:58.878274   68429 ops.go:34] apiserver oom_adj: -16
	I0815 18:36:58.878298   68429 kubeadm.go:597] duration metric: took 8.507965813s to restartPrimaryControlPlane
	I0815 18:36:58.878308   68429 kubeadm.go:394] duration metric: took 8.570508558s to StartCluster
	I0815 18:36:58.878327   68429 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.878499   68429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:36:58.879927   68429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.880213   68429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:36:58.880262   68429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:36:58.880339   68429 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880375   68429 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-423062"
	I0815 18:36:58.880374   68429 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-423062"
	W0815 18:36:58.880383   68429 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:36:58.880367   68429 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880403   68429 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.880410   68429 addons.go:243] addon metrics-server should already be in state true
	I0815 18:36:58.880414   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880422   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:58.880428   68429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-423062"
	I0815 18:36:58.880434   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880772   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880778   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880801   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880820   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880826   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880855   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.882047   68429 out.go:177] * Verifying Kubernetes components...
	I0815 18:36:58.883440   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:58.895575   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0815 18:36:58.895577   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0815 18:36:58.895739   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0815 18:36:58.896031   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896063   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896121   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896511   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896529   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896612   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896631   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896749   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896768   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896917   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.896963   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897099   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897132   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.897483   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897527   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.897535   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897558   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.900773   68429 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.900796   68429 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:36:58.900825   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.901206   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.901238   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.912877   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0815 18:36:58.912903   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37245
	I0815 18:36:58.913271   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913344   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913835   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913845   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913852   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.913862   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.914177   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914218   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914361   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.914408   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.916165   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.916601   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.918553   68429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:36:58.918560   68429 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:36:56.154697   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.654414   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.919539   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0815 18:36:58.919773   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:36:58.919790   68429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:36:58.919809   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919884   68429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:58.919900   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:36:58.919916   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919945   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.920330   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.920343   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.920777   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.921363   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.921401   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.923262   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923629   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.923656   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923684   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924108   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924256   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924319   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.924337   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924501   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924564   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.924688   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.924773   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924944   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.925266   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.938064   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0815 18:36:58.938411   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.938762   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.938782   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.939057   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.939214   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.941134   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.941395   68429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:58.941414   68429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:36:58.941436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.943936   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944331   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.944355   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.944765   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.944900   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.944977   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:59.069466   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:59.090259   68429 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:36:59.203591   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:59.232676   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:36:59.232705   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:36:59.273079   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:59.287625   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:36:59.287653   68429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:36:59.359798   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:36:59.359821   68429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:36:59.406350   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:00.373429   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16980511s)
	I0815 18:37:00.373477   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373495   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373501   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.10037967s)
	I0815 18:37:00.373546   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373563   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373787   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373805   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373848   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373852   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373863   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373866   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373890   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373879   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373937   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.374313   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374322   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.374326   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.374344   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374355   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.379434   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.379450   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.379666   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.379679   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.389853   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.389872   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390152   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390173   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390181   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.390189   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390396   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390447   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390461   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390475   68429 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-423062"
	I0815 18:37:00.392530   68429 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:37:00.393703   68429 addons.go:510] duration metric: took 1.51344438s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 18:37:00.093896   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:00.094391   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:37:00.094453   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:37:00.094333   69670 retry.go:31] will retry after 2.855023319s: waiting for machine to come up
	I0815 18:37:04.297557   67936 start.go:364] duration metric: took 52.755115386s to acquireMachinesLock for "no-preload-599042"
	I0815 18:37:04.297614   67936 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:37:04.297639   67936 fix.go:54] fixHost starting: 
	I0815 18:37:04.298066   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:04.298096   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:04.317897   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
	I0815 18:37:04.318309   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:04.318797   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:04.318822   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:04.319191   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:04.319388   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:04.319543   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:04.320970   67936 fix.go:112] recreateIfNeeded on no-preload-599042: state=Stopped err=<nil>
	I0815 18:37:04.320994   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	W0815 18:37:04.321164   67936 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:37:04.322689   67936 out.go:177] * Restarting existing kvm2 VM for "no-preload-599042" ...
	I0815 18:37:00.654833   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:03.154235   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:02.950449   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950903   68713 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:37:02.950931   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950941   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:37:02.951319   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.951356   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | skip adding static IP to network mk-old-k8s-version-278865 - found existing host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"}
	I0815 18:37:02.951376   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:37:02.951393   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:37:02.951424   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:37:02.953498   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.953804   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953927   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:37:02.953957   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:37:02.953989   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:02.954001   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:37:02.954009   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:37:03.076431   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:03.076748   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:37:03.077325   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.079733   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080100   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.080132   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080332   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:37:03.080537   68713 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:03.080554   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:03.080717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.082778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083140   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.083168   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083331   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.083482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083612   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083730   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.083881   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.084067   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.084078   68713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:03.188779   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:03.188813   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189045   68713 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:37:03.189069   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189284   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.191858   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192171   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.192192   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192328   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.192533   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192676   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192822   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.193015   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.193180   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.193192   68713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:37:03.313099   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:37:03.313129   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.315840   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316196   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.316226   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316378   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.316608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316760   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316885   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.317001   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.317184   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.317207   68713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:03.429897   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
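	The SSH script above keeps /etc/hosts idempotent: it only rewrites the 127.0.1.1 line (or appends one) when the hostname is not already mapped. Below is a minimal Go sketch of that same logic, using the hostname from this log; it is an illustration for the reader, not minikube's implementation.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry returns contents with "127.0.1.1 <hostname>" present exactly once,
	// mirroring the grep/sed/tee script run over SSH above.
	func setHostsEntry(contents, hostname string) string {
		lines := strings.Split(contents, "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) > 1 && f[len(f)-1] == hostname {
				return contents // hostname already mapped to some address
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname // rewrite the existing alias line
				return strings.Join(lines, "\n")
			}
		}
		return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n" // append if no alias line exists
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Print(setHostsEntry(string(data), "old-k8s-version-278865"))
	}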
	I0815 18:37:03.429934   68713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:03.429962   68713 buildroot.go:174] setting up certificates
	I0815 18:37:03.429972   68713 provision.go:84] configureAuth start
	I0815 18:37:03.429983   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.430274   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.432724   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433053   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.433083   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433212   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.435181   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435514   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.435543   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435657   68713 provision.go:143] copyHostCerts
	I0815 18:37:03.435715   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:03.435736   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:03.435804   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:03.435919   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:03.435929   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:03.435959   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:03.436045   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:03.436055   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:03.436082   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:03.436170   68713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
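	For context, the server certificate above is issued with the listed SANs and signed by the local CA. The following self-contained Go sketch shows that step with the standard crypto/x509 package; the CA here is generated in memory purely for illustration, whereas the real flow loads ca.pem/ca-key.pem from the .minikube certs directory.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// CA key pair; the real provisioner loads ca.pem / ca-key.pem instead of generating one.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(1, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		// Server certificate carrying the SANs from the log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-278865"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-278865"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}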
	I0815 18:37:03.604924   68713 provision.go:177] copyRemoteCerts
	I0815 18:37:03.604979   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:03.605003   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.607328   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607616   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.607634   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607821   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.608016   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.608171   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.608429   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:03.690560   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:03.714632   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:37:03.737805   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:03.762338   68713 provision.go:87] duration metric: took 332.353741ms to configureAuth
	I0815 18:37:03.762371   68713 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:03.762543   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:37:03.762608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.765626   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.765988   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.766018   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.766211   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.766380   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766574   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766712   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.766897   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.767053   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.767069   68713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:04.050635   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:04.050663   68713 machine.go:96] duration metric: took 970.113556ms to provisionDockerMachine
	I0815 18:37:04.050674   68713 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:37:04.050685   68713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:04.050717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.051048   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:04.051081   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.053709   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054095   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.054124   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054432   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.054622   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.054774   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.054914   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.139381   68713 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:04.145097   68713 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:04.145124   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:04.145201   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:04.145298   68713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:04.145421   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:04.156166   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:04.181562   68713 start.go:296] duration metric: took 130.872499ms for postStartSetup
	I0815 18:37:04.181605   68713 fix.go:56] duration metric: took 19.879821037s for fixHost
	I0815 18:37:04.181629   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.184268   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184652   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.184682   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184917   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.185151   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185345   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185502   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.185677   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:04.185925   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:04.185938   68713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:04.297391   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747024.271483326
	
	I0815 18:37:04.297413   68713 fix.go:216] guest clock: 1723747024.271483326
	I0815 18:37:04.297423   68713 fix.go:229] Guest: 2024-08-15 18:37:04.271483326 +0000 UTC Remote: 2024-08-15 18:37:04.181610291 +0000 UTC m=+251.426055371 (delta=89.873035ms)
	I0815 18:37:04.297448   68713 fix.go:200] guest clock delta is within tolerance: 89.873035ms
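	The clock check above parses the guest's "date +%s.%N" output and compares it against the host clock. A small Go sketch of that comparison follows, reusing the guest timestamp from this log; the one-second tolerance is an assumed value for illustration only.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1723747024.271483326") // value seen in the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed threshold, for illustration
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}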
	I0815 18:37:04.297455   68713 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 19.99571173s
	I0815 18:37:04.297504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.297818   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:04.300970   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301425   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.301455   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301609   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302194   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302404   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302495   68713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:04.302545   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.302679   68713 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:04.302705   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.305673   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.305903   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306066   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306092   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306273   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306301   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306337   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306537   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306657   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306664   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306827   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306834   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.307009   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.409319   68713 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:04.415576   68713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:04.565772   68713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:04.571909   68713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:04.571996   68713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:04.588400   68713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:04.588427   68713 start.go:495] detecting cgroup driver to use...
	I0815 18:37:04.588528   68713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:04.604253   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:04.619003   68713 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:04.619051   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:04.632530   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:04.646080   68713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:04.763855   68713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:04.922470   68713 docker.go:233] disabling docker service ...
	I0815 18:37:04.922566   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:04.937301   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:04.950721   68713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:05.079767   68713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:05.210207   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:05.225569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:05.247998   68713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:37:05.248070   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.262851   68713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:05.262924   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.274489   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.285901   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.298749   68713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:05.310052   68713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:05.320992   68713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:05.321073   68713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:05.340323   68713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
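	The three commands above form a fallback: when the bridge-netfilter sysctl is missing, load br_netfilter and then make sure IPv4 forwarding is on. A rough Go equivalent (assuming root privileges) is shown below; it mirrors the commands seen in the log rather than minikube's actual code.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// sysctl exits non-zero while /proc/sys/net/bridge/ is absent, i.e. before br_netfilter is loaded.
		if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if out, merr := exec.Command("modprobe", "br_netfilter").CombinedOutput(); merr != nil {
				fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v: %s\n", merr, out)
				return
			}
		}
		// Same effect as `sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}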
	I0815 18:37:05.354069   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:05.483573   68713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:05.647020   68713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:05.647094   68713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:05.653850   68713 start.go:563] Will wait 60s for crictl version
	I0815 18:37:05.653924   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:05.658476   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:05.697818   68713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:05.697907   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.724931   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.755831   68713 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:37:01.094934   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:03.594364   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:05.756950   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:05.759791   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760188   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:05.760220   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760468   68713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:05.764753   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:05.777462   68713 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:05.777614   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:37:05.777679   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:05.848895   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:05.848967   68713 ssh_runner.go:195] Run: which lz4
	I0815 18:37:05.853103   68713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:37:05.858012   68713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:37:05.858046   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:37:07.520567   68713 crio.go:462] duration metric: took 1.667489785s to copy over tarball
	I0815 18:37:07.520642   68713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
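	The sequence above first checks whether the preload tarball already exists on the guest, copies it over when it does not, and then unpacks it into /var with lz4 decompression. A simplified, local Go sketch of that stat-then-copy-then-extract flow follows; the scp over the machine's SSH session is replaced by a plain cp for illustration.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensurePreload copies the cached tarball to dst if it is missing, then unpacks it into /var.
	func ensurePreload(src, dst string) error {
		if _, err := os.Stat(dst); err != nil {
			// In the log this copy is an scp over SSH to the guest.
			if out, cerr := exec.Command("cp", src, dst).CombinedOutput(); cerr != nil {
				return fmt.Errorf("copy preload: %v: %s", cerr, out)
			}
		}
		out, err := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", dst).CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract preload: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		src := "/home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
		if err := ensurePreload(src, "/preloaded.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}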
	I0815 18:37:04.324093   67936 main.go:141] libmachine: (no-preload-599042) Calling .Start
	I0815 18:37:04.324263   67936 main.go:141] libmachine: (no-preload-599042) Ensuring networks are active...
	I0815 18:37:04.325099   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network default is active
	I0815 18:37:04.325778   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network mk-no-preload-599042 is active
	I0815 18:37:04.326007   67936 main.go:141] libmachine: (no-preload-599042) Getting domain xml...
	I0815 18:37:04.328184   67936 main.go:141] libmachine: (no-preload-599042) Creating domain...
	I0815 18:37:05.626206   67936 main.go:141] libmachine: (no-preload-599042) Waiting to get IP...
	I0815 18:37:05.627374   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.627877   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.627935   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.627844   69876 retry.go:31] will retry after 199.774188ms: waiting for machine to come up
	I0815 18:37:05.829673   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.830213   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.830240   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.830170   69876 retry.go:31] will retry after 255.850483ms: waiting for machine to come up
	I0815 18:37:06.087766   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.088378   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.088405   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.088330   69876 retry.go:31] will retry after 351.231421ms: waiting for machine to come up
	I0815 18:37:06.440937   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.441597   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.441626   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.441572   69876 retry.go:31] will retry after 602.620924ms: waiting for machine to come up
	I0815 18:37:07.046269   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.046745   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.046769   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.046712   69876 retry.go:31] will retry after 578.450642ms: waiting for machine to come up
	I0815 18:37:07.627330   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.627832   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.627859   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.627791   69876 retry.go:31] will retry after 731.331176ms: waiting for machine to come up
	I0815 18:37:08.361310   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:08.361746   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:08.361776   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:08.361706   69876 retry.go:31] will retry after 1.089237688s: waiting for machine to come up
	I0815 18:37:05.157378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:07.162990   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:09.654672   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:06.093822   68429 node_ready.go:49] node "default-k8s-diff-port-423062" has status "Ready":"True"
	I0815 18:37:06.093853   68429 node_ready.go:38] duration metric: took 7.003558244s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:37:06.093867   68429 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:06.103462   68429 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111214   68429 pod_ready.go:93] pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.111235   68429 pod_ready.go:82] duration metric: took 7.746382ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111244   68429 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117713   68429 pod_ready.go:93] pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.117739   68429 pod_ready.go:82] duration metric: took 6.487608ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117750   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:08.126216   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.128095   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.534169   68713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013498464s)
	I0815 18:37:10.534194   68713 crio.go:469] duration metric: took 3.013602868s to extract the tarball
	I0815 18:37:10.534201   68713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:37:10.578998   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:10.619043   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:10.619146   68713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:10.619246   68713 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.619247   68713 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.619278   68713 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:37:10.619275   68713 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.619291   68713 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.619304   68713 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.619322   68713 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.619405   68713 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621367   68713 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.621384   68713 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:37:10.621468   68713 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.621500   68713 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.621596   68713 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.621646   68713 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621706   68713 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.621897   68713 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.798617   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.828530   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:37:10.859528   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.918714   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.977028   68713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:37:10.977073   68713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.977119   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:10.980573   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.985503   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.990642   68713 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:37:10.990684   68713 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:37:10.990733   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.000388   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.007526   68713 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:37:11.007589   68713 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.007642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.008543   68713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:37:11.008581   68713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.008621   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.008642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077224   68713 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:37:11.077269   68713 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077228   68713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:37:11.077347   68713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.077371   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111299   68713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:37:11.111376   68713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.111387   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.111421   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111471   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.156942   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.156944   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.156997   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.263355   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.263448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.263455   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.263544   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.291407   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.312626   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.334606   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.427937   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.433739   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:11.435371   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.439448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.439541   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:37:11.450901   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.477906   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.520009   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:37:11.572349   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:37:11.686243   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:37:11.686295   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:37:11.686325   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:37:11.686378   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:37:11.686420   68713 cache_images.go:92] duration metric: took 1.067250234s to LoadCachedImages
	W0815 18:37:11.686494   68713 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0815 18:37:11.686508   68713 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:37:11.686620   68713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
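	The kubelet drop-in shown above is rendered from the node's name, IP and Kubernetes version. Below is a minimal text/template sketch that reproduces it with the values from this log; the template itself is an assumption for illustration, not minikube's source.

	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values taken from the log above.
		if err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.20.0",
			"NodeName":          "old-k8s-version-278865",
			"NodeIP":            "192.168.39.89",
		}); err != nil {
			panic(err)
		}
	}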
	I0815 18:37:11.686693   68713 ssh_runner.go:195] Run: crio config
	I0815 18:37:11.736781   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:37:11.736808   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:11.736824   68713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:11.736851   68713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:37:11.737039   68713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:11.737120   68713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:37:11.747511   68713 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:11.747585   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:11.757850   68713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:37:11.775982   68713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:11.792938   68713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:37:11.811576   68713 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:11.815708   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:11.829992   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:11.983884   68713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:12.002603   68713 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:37:12.002632   68713 certs.go:194] generating shared ca certs ...
	I0815 18:37:12.002682   68713 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.002867   68713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:12.002926   68713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:12.002942   68713 certs.go:256] generating profile certs ...
	I0815 18:37:12.025160   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:37:12.025296   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:37:12.025351   68713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:37:12.025516   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:12.025578   68713 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:12.025591   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:12.025627   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:12.025661   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:12.025691   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:12.025746   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:12.026614   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:12.066771   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:12.109649   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:12.176744   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:12.207990   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:37:12.244999   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:37:12.282338   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:12.308761   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:37:12.332316   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:12.355977   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:12.379169   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:12.405472   68713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:12.424110   68713 ssh_runner.go:195] Run: openssl version
	I0815 18:37:12.430231   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:12.441531   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.445971   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.446061   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.452134   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:12.466809   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:12.478211   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482659   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482708   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.490225   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:12.504908   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:12.516825   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521854   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521911   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.527884   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
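The ls/hash/symlink triplets above follow OpenSSL's hashed CA directory convention: each certificate is linked into /etc/ssl/certs under its own name and again under its subject hash with a ".0" suffix (b5213941.0, 51391683.0, 3ec20f2e.0 here), which is how OpenSSL locates trust anchors. The pattern for a single certificate, sketched from the commands above:

	# Link one CA cert into the hashed trust directory (values from this run)
	cert=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$cert" /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"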
	I0815 18:37:12.539398   68713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:12.544010   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:12.549918   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:12.555714   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:12.561895   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:12.567736   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:12.573664   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
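The six openssl runs above verify that each existing control-plane certificate stays valid for at least another 24 hours; -checkend 86400 exits non-zero if the certificate expires within that window, which is what would force regeneration on this restart path. One check in standalone form:

	# Exit 0: certificate does not expire within the next 86400 seconds (24h)
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "still valid for >= 24h" \
	  || echo "expires within 24h"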
	I0815 18:37:12.579510   68713 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:12.579627   68713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:12.579688   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.621503   68713 cri.go:89] found id: ""
	I0815 18:37:12.621576   68713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:12.632722   68713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:12.632746   68713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:12.632796   68713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:12.643192   68713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:12.644607   68713 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:12.645629   68713 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-278865" cluster setting kubeconfig missing "old-k8s-version-278865" context setting]
	I0815 18:37:12.647073   68713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.653052   68713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:12.665777   68713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.89
	I0815 18:37:12.665808   68713 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:12.665821   68713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:12.665872   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.713574   68713 cri.go:89] found id: ""
	I0815 18:37:12.713641   68713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:12.731459   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:12.741769   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:12.741789   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:12.741833   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:12.750990   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:12.751049   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:12.761621   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:12.771204   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:12.771261   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:12.782012   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:09.452971   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:09.453451   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:09.453494   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:09.453393   69876 retry.go:31] will retry after 1.35461204s: waiting for machine to come up
	I0815 18:37:10.809664   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:10.810127   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:10.810158   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:10.810065   69876 retry.go:31] will retry after 1.709820883s: waiting for machine to come up
	I0815 18:37:12.521458   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:12.521988   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:12.522016   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:12.521930   69876 retry.go:31] will retry after 1.401971708s: waiting for machine to come up
	I0815 18:37:13.925401   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:13.925868   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:13.925898   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:13.925824   69876 retry.go:31] will retry after 2.768002946s: waiting for machine to come up
	I0815 18:37:11.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:14.154561   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.400960   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:13.128357   68429 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.128379   68429 pod_ready.go:82] duration metric: took 7.010621879s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.128389   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136617   68429 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.136638   68429 pod_ready.go:82] duration metric: took 8.242471ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136648   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143530   68429 pod_ready.go:93] pod "kube-proxy-bnxv7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.143551   68429 pod_ready.go:82] duration metric: took 6.895931ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143563   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151691   68429 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.151721   68429 pod_ready.go:82] duration metric: took 8.149821ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151735   68429 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:15.158172   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.791928   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:12.791994   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:12.801858   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:12.811023   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:12.811083   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:12.822189   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:12.834293   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:12.974325   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.452192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.690442   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.798270   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
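Rather than a full kubeadm init, the restart path replays the individual init phases against the staged config, in the order shown above. An equivalent sequence (the log runs each phase via sudo env PATH=... with the v1.20.0 binaries minikube staged earlier):

	# kubeadm init phases replayed on cluster restart
	KUBEADM=/var/lib/minikube/binaries/v1.20.0/kubeadm
	CFG=/var/tmp/minikube/kubeadm.yaml
	sudo "$KUBEADM" init phase certs all         --config "$CFG"
	sudo "$KUBEADM" init phase kubeconfig all    --config "$CFG"
	sudo "$KUBEADM" init phase kubelet-start     --config "$CFG"
	sudo "$KUBEADM" init phase control-plane all --config "$CFG"
	sudo "$KUBEADM" init phase etcd local        --config "$CFG"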
	I0815 18:37:13.900783   68713 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:13.900877   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.401954   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.901809   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.401755   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.901010   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.401794   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.901149   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:17.401599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
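The repeated pgrep runs above (continuing below, interleaved with the other profiles' logs) are the wait-for-apiserver loop: poll roughly every 500ms until a kube-apiserver process for this profile appears. The loop amounts to:

	# Poll for the kube-apiserver process, ~2 checks per second as in the timestamps above
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
	  sleep 0.5
	done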
	I0815 18:37:16.694999   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:16.695488   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:16.695506   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:16.695430   69876 retry.go:31] will retry after 2.308386075s: waiting for machine to come up
	I0815 18:37:16.154692   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:18.653763   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.159197   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:19.159442   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.901511   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.401720   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.900976   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.401223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.901522   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.901573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:22.401279   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.005581   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:19.005979   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:19.006008   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:19.005930   69876 retry.go:31] will retry after 2.758801207s: waiting for machine to come up
	I0815 18:37:21.766860   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767286   67936 main.go:141] libmachine: (no-preload-599042) Found IP for machine: 192.168.72.14
	I0815 18:37:21.767303   67936 main.go:141] libmachine: (no-preload-599042) Reserving static IP address...
	I0815 18:37:21.767314   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has current primary IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767722   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.767745   67936 main.go:141] libmachine: (no-preload-599042) Reserved static IP address: 192.168.72.14
	I0815 18:37:21.767757   67936 main.go:141] libmachine: (no-preload-599042) DBG | skip adding static IP to network mk-no-preload-599042 - found existing host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"}
	I0815 18:37:21.767768   67936 main.go:141] libmachine: (no-preload-599042) DBG | Getting to WaitForSSH function...
	I0815 18:37:21.767780   67936 main.go:141] libmachine: (no-preload-599042) Waiting for SSH to be available...
	I0815 18:37:21.769674   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.769950   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.769973   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.770072   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH client type: external
	I0815 18:37:21.770103   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa (-rw-------)
	I0815 18:37:21.770134   67936 main.go:141] libmachine: (no-preload-599042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:21.770147   67936 main.go:141] libmachine: (no-preload-599042) DBG | About to run SSH command:
	I0815 18:37:21.770162   67936 main.go:141] libmachine: (no-preload-599042) DBG | exit 0
	I0815 18:37:21.888536   67936 main.go:141] libmachine: (no-preload-599042) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:21.888900   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetConfigRaw
	I0815 18:37:21.889541   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:21.892351   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892730   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.892760   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892976   67936 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/config.json ...
	I0815 18:37:21.893181   67936 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:21.893203   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:21.893404   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.895471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895774   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.895812   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895967   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.896153   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896334   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896522   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.896697   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.896872   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.896884   67936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:21.992598   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:21.992622   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.992856   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:37:21.992884   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.993095   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.995586   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.995902   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.995930   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.996051   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.996239   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996375   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996538   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.996691   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.996869   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.996884   67936 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-599042 && echo "no-preload-599042" | sudo tee /etc/hostname
	I0815 18:37:22.106513   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-599042
	
	I0815 18:37:22.106553   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.109655   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110111   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.110143   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110362   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.110548   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110718   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110838   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.110970   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.111141   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.111162   67936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-599042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-599042/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-599042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:22.221858   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:22.221898   67936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:22.221924   67936 buildroot.go:174] setting up certificates
	I0815 18:37:22.221938   67936 provision.go:84] configureAuth start
	I0815 18:37:22.221956   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:22.222278   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:22.225058   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225374   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.225410   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225544   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.227539   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.227885   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.227929   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.228052   67936 provision.go:143] copyHostCerts
	I0815 18:37:22.228111   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:22.228126   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:22.228190   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:22.228273   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:22.228282   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:22.228301   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:22.228352   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:22.228359   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:22.228375   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:22.228428   67936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.no-preload-599042 san=[127.0.0.1 192.168.72.14 localhost minikube no-preload-599042]
	I0815 18:37:22.383520   67936 provision.go:177] copyRemoteCerts
	I0815 18:37:22.383578   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:22.383601   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.386048   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386303   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.386338   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386566   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.386722   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.386894   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.387036   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.470828   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:22.494929   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:22.519545   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:37:22.544417   67936 provision.go:87] duration metric: took 322.465732ms to configureAuth
	I0815 18:37:22.544442   67936 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:22.544661   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:22.544736   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.547284   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547610   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.547641   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547876   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.548076   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548271   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548413   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.548594   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.548795   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.548818   67936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:22.803896   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:22.803924   67936 machine.go:96] duration metric: took 910.728961ms to provisionDockerMachine
	I0815 18:37:22.803935   67936 start.go:293] postStartSetup for "no-preload-599042" (driver="kvm2")
	I0815 18:37:22.803945   67936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:22.803959   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:22.804274   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:22.804322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.807041   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807437   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.807467   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807570   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.807747   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.807906   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.808002   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.887667   67936 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:22.892368   67936 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:22.892393   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:22.892480   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:22.892588   67936 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:22.892681   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:22.901987   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:22.927782   67936 start.go:296] duration metric: took 123.834401ms for postStartSetup
	I0815 18:37:22.927823   67936 fix.go:56] duration metric: took 18.630196933s for fixHost
	I0815 18:37:22.927848   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.930378   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930728   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.930755   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930868   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.931043   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931386   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.931538   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.931705   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.931718   67936 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:23.029393   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747042.997661196
	
	I0815 18:37:23.029423   67936 fix.go:216] guest clock: 1723747042.997661196
	I0815 18:37:23.029433   67936 fix.go:229] Guest: 2024-08-15 18:37:22.997661196 +0000 UTC Remote: 2024-08-15 18:37:22.927828036 +0000 UTC m=+353.975665928 (delta=69.83316ms)
	I0815 18:37:23.029455   67936 fix.go:200] guest clock delta is within tolerance: 69.83316ms
	I0815 18:37:23.029465   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 18.731874864s
	I0815 18:37:23.029491   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.029730   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:23.031885   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032242   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.032261   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032449   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.032908   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033062   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033149   67936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:23.033197   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.033303   67936 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:23.033322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.035943   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.035987   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036327   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036433   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036463   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036482   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036657   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036836   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036855   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.036966   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.037039   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037119   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037183   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.037242   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.117399   67936 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:23.138614   67936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:23.287862   67936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:23.293943   67936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:23.294013   67936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:23.310957   67936 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:23.310987   67936 start.go:495] detecting cgroup driver to use...
	I0815 18:37:23.311067   67936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:23.326641   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:23.340650   67936 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:23.340708   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:23.355401   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:23.369033   67936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:23.480891   67936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:23.629690   67936 docker.go:233] disabling docker service ...
	I0815 18:37:23.629782   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:23.644372   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:23.658312   67936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:23.779999   67936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:23.902630   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:23.917453   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:23.935696   67936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:37:23.935749   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.946031   67936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:23.946106   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.956639   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.967148   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.978049   67936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:23.989000   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.999290   67936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.017002   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
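Net effect of the crictl.yaml write and the sed edits above: CRI-O is pointed at the registry.k8s.io/pause:3.10 pause image, switched to the cgroupfs cgroup manager with conmon in the pod cgroup, and allowed to bind privileged ports inside pods. The touched keys in /etc/crio/crio.conf.d/02-crio.conf end up roughly as follows (a sketch of the edited keys only, not a capture of the file):

	# /etc/crio/crio.conf.d/02-crio.conf -- keys affected by the edits above
	pause_image = "registry.k8s.io/pause:3.10"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]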
	I0815 18:37:24.027432   67936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:24.036714   67936 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:24.036770   67936 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:24.048956   67936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:24.058269   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:24.173548   67936 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:24.316383   67936 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:24.316462   67936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:24.321726   67936 start.go:563] Will wait 60s for crictl version
	I0815 18:37:24.321803   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.325718   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:24.362995   67936 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:24.363099   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.392678   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.424128   67936 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:37:20.654186   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:23.154893   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:21.658499   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:24.159865   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:22.901608   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.401519   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.901287   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.401831   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.901547   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.401220   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.901109   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.401441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.901515   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:27.401258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.425451   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:24.428263   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428631   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:24.428656   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428833   67936 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:24.433343   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:24.446011   67936 kubeadm.go:883] updating cluster {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:24.446123   67936 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:37:24.446168   67936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:24.484321   67936 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:37:24.484346   67936 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:24.484417   67936 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.484429   67936 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.484444   67936 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.484470   67936 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.484472   67936 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.484581   67936 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.484583   67936 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 18:37:24.484585   67936 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485844   67936 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 18:37:24.485852   67936 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.485837   67936 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.485906   67936 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.646217   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.653405   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.658441   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.662835   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.662858   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.681979   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.715361   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 18:37:24.722352   67936 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 18:37:24.722391   67936 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.722450   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.787439   67936 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 18:37:24.787486   67936 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.787530   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810570   67936 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 18:37:24.810606   67936 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 18:37:24.810612   67936 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.810630   67936 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.810666   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810667   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841566   67936 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 18:37:24.841617   67936 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.841669   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841698   67936 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 18:37:24.841743   67936 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.841800   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.950875   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.950918   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.950933   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.950989   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.951004   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.951052   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.079551   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.079601   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.079634   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.084852   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.084874   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.084910   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.216095   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.216235   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.216308   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.216384   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.216400   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.216431   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.336055   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 18:37:25.336126   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 18:37:25.336180   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.336222   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:25.336181   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 18:37:25.336320   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:25.352527   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 18:37:25.352566   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 18:37:25.352592   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 18:37:25.352639   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:25.352650   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:25.352702   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:25.355747   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 18:37:25.355764   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355769   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 18:37:25.355797   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355806   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 18:37:25.363222   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 18:37:25.363257   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 18:37:25.363435   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 18:37:25.476601   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142118   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.786287506s)
	I0815 18:37:28.142134   67936 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.665496935s)
	I0815 18:37:28.142146   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 18:37:28.142177   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142190   67936 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 18:37:28.142220   67936 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142244   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142259   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:25.155516   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.156071   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:29.157389   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:26.658491   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:28.659080   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.901777   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.401103   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.901746   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.401521   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.901691   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.401326   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.901672   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.401534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.901013   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:32.401696   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.598348   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.456076001s)
	I0815 18:37:29.598380   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 18:37:29.598404   67936 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598407   67936 ssh_runner.go:235] Completed: which crictl: (1.456124508s)
	I0815 18:37:29.598451   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598474   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.495864   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.897383444s)
	I0815 18:37:31.495897   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.897403663s)
	I0815 18:37:31.495902   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 18:37:31.495931   67936 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.657799   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:34.156377   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:31.158308   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:33.159177   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:35.668218   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:32.901441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.901095   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.401705   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.901020   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.401019   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.901094   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.400952   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.901717   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:37.401701   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.526372   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.030374686s)
	I0815 18:37:35.526410   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 18:37:35.526422   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.030343547s)
	I0815 18:37:35.526438   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.526482   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:35.526483   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.570806   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 18:37:35.570906   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:37.500059   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.973499408s)
	I0815 18:37:37.500098   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 18:37:37.500120   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:37.500072   67936 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.929150036s)
	I0815 18:37:37.500208   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 18:37:37.500161   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:36.157239   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.656856   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.158685   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:40.158728   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:37.901353   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.401426   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.901599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.401173   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.901593   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.401758   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.401698   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:42.401409   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.563532   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.063281797s)
	I0815 18:37:39.563562   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 18:37:39.563595   67936 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:39.563642   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:40.208180   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 18:37:40.208232   67936 cache_images.go:123] Successfully loaded all cached images
	I0815 18:37:40.208240   67936 cache_images.go:92] duration metric: took 15.723882738s to LoadCachedImages
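Note: because no preload tarball exists for this runtime, each cached image is copied to the node and imported with podman, whose image store CRI-O shares. A hand-run equivalent for a single image, mirroring the commands in the log, would be:

    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
    sudo crictl images | grep kube-apiserver    # confirm the CRI runtime now sees the image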
	I0815 18:37:40.208252   67936 kubeadm.go:934] updating node { 192.168.72.14 8443 v1.31.0 crio true true} ...
	I0815 18:37:40.208416   67936 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-599042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:40.208544   67936 ssh_runner.go:195] Run: crio config
	I0815 18:37:40.261526   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:40.261545   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:40.261552   67936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:40.261572   67936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.14 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-599042 NodeName:no-preload-599042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:37:40.261688   67936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-599042"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:40.261742   67936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:37:40.271844   67936 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:40.271921   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:40.280957   67936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0815 18:37:40.297378   67936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:40.313215   67936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
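Note: the three documents dumped above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what is written to /var/tmp/minikube/kubeadm.yaml.new here. As an aside (an assumption, not something this test runs), recent kubeadm releases ship a validator that could sanity-check such a file by hand:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new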
	I0815 18:37:40.329640   67936 ssh_runner.go:195] Run: grep 192.168.72.14	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:40.333331   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:40.344805   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:40.457352   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:40.475219   67936 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042 for IP: 192.168.72.14
	I0815 18:37:40.475238   67936 certs.go:194] generating shared ca certs ...
	I0815 18:37:40.475252   67936 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:40.475416   67936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:40.475475   67936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:40.475489   67936 certs.go:256] generating profile certs ...
	I0815 18:37:40.475591   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.key
	I0815 18:37:40.475670   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key.15ba6898
	I0815 18:37:40.475714   67936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key
	I0815 18:37:40.475865   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:40.475904   67936 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:40.475917   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:40.475950   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:40.475978   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:40.476012   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:40.476069   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:40.476738   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:40.513554   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:40.549095   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:40.578010   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:40.612637   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:37:40.639974   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:37:40.672937   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:40.696889   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:37:40.721258   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:40.744104   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:40.766463   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:40.788628   67936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:40.805346   67936 ssh_runner.go:195] Run: openssl version
	I0815 18:37:40.811193   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:40.822610   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826918   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826969   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.832544   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:40.843338   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:40.854032   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858512   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858563   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.864247   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:40.874724   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:40.885538   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889849   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889899   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.895258   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
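Note: the symlink names created above follow OpenSSL's subject-hash convention: each CA is linked as <hash>.0 so verification code can locate it by hash. Using the minikubeCA value visible in the log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem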
	I0815 18:37:40.906841   67936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:40.911629   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:40.918085   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:40.924194   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:40.930009   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:40.935634   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:40.941168   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
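Note: each openssl invocation above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit means it is expiring or already expired. For example:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for at least 24h" \
      || echo "expires within 24h"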
	I0815 18:37:40.946761   67936 kubeadm.go:392] StartCluster: {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:40.946836   67936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:40.946874   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:40.990733   67936 cri.go:89] found id: ""
	I0815 18:37:40.990808   67936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:41.002969   67936 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:41.002988   67936 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:41.003041   67936 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:41.013722   67936 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:41.015079   67936 kubeconfig.go:125] found "no-preload-599042" server: "https://192.168.72.14:8443"
	I0815 18:37:41.017905   67936 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:41.029240   67936 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.14
	I0815 18:37:41.029271   67936 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:41.029284   67936 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:41.029326   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:41.064689   67936 cri.go:89] found id: ""
	I0815 18:37:41.064754   67936 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:41.085195   67936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:41.096355   67936 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:41.096375   67936 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:41.096425   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:41.106887   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:41.106941   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:41.117599   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:41.127956   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:41.128020   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:41.137384   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.146075   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:41.146122   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.156417   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:41.165287   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:41.165325   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:41.174245   67936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:41.183335   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:41.314804   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.422591   67936 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.107749325s)
	I0815 18:37:42.422628   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.642065   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.710265   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.791233   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:42.791334   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.291538   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.791682   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.831611   67936 api_server.go:72] duration metric: took 1.040390925s to wait for apiserver process to appear ...
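Note: the probe repeated above is plain pgrep: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching PID, so it succeeds once a kube-apiserver launched for this minikube profile exists. Run by hand it looks like:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver process is up"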
	I0815 18:37:43.831641   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:37:43.831662   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:43.832110   67936 api_server.go:269] stopped: https://192.168.72.14:8443/healthz: Get "https://192.168.72.14:8443/healthz": dial tcp 192.168.72.14:8443: connect: connection refused
	I0815 18:37:41.154701   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:43.655756   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.661385   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:45.158918   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.901106   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.401146   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.901869   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.401483   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.901302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.401505   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.901504   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.401025   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.901713   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:47.401588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.332554   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.112640   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:37:47.112668   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:37:47.112681   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.244211   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.244246   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.332375   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.339129   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.339153   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.831731   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.836308   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.836330   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.331914   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.336314   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:48.336347   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.831862   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.836012   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:37:48.842971   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:37:48.842996   67936 api_server.go:131] duration metric: took 5.011346791s to wait for apiserver health ...
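The retry loop recorded above is plain HTTP polling: a 403 means anonymous access is still forbidden, a 500 means some post-start hooks have not finished, and the wait ends once /healthz returns 200 ("ok"). A minimal Go sketch of that pattern follows; it is not minikube's actual api_server.go, and the retry cadence and overall timeout are assumptions read off the timestamps above (only the endpoint URL is taken verbatim from the log).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns 200
// or the overall timeout elapses. 403 and 500 responses are treated as
// "not ready yet", mirroring the behaviour visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The test cluster serves a self-signed certificate, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the timestamps above
	}
	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint copied from the log above; the 4-minute budget is an assumption.
	if err := waitForHealthz("https://192.168.72.14:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}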
	I0815 18:37:48.843008   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:48.843015   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:48.844939   67936 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:37:48.846262   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:37:48.857335   67936 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:37:48.876370   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:37:48.886582   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:37:48.886628   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:37:48.886640   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:37:48.886653   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:37:48.886666   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:37:48.886679   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 18:37:48.886691   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:37:48.886701   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:37:48.886711   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 18:37:48.886722   67936 system_pods.go:74] duration metric: took 10.329234ms to wait for pod list to return data ...
	I0815 18:37:48.886736   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:37:48.890525   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:37:48.890560   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:37:48.890571   67936 node_conditions.go:105] duration metric: took 3.828616ms to run NodePressure ...
	I0815 18:37:48.890590   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:46.155548   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:48.655549   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:49.183845   67936 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188602   67936 kubeadm.go:739] kubelet initialised
	I0815 18:37:49.188629   67936 kubeadm.go:740] duration metric: took 4.755371ms waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188639   67936 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:49.193101   67936 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.199195   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199215   67936 pod_ready.go:82] duration metric: took 6.088761ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.199226   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199236   67936 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.205076   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205095   67936 pod_ready.go:82] duration metric: took 5.848521ms for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.205105   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205111   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.210559   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210578   67936 pod_ready.go:82] duration metric: took 5.449861ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.210587   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210594   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.281799   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281828   67936 pod_ready.go:82] duration metric: took 71.206144ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.281840   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281850   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.680097   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680121   67936 pod_ready.go:82] duration metric: took 398.261641ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.680131   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680136   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.080391   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080415   67936 pod_ready.go:82] duration metric: took 400.272871ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.080425   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080430   67936 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.482715   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482744   67936 pod_ready.go:82] duration metric: took 402.304556ms for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.482753   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482761   67936 pod_ready.go:39] duration metric: took 1.294109816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
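Each per-pod wait above is skipped while the node hosting the pod still reports Ready=False, and the same checks are retried later once the node becomes Ready. A minimal client-go sketch of testing a pod's Ready condition is shown below; it is not minikube's pod_ready.go, the kubeconfig path and pod name are copied from the log purely for illustration, and the 4-minute budget and 500ms retry interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path and pod name are taken from the log above, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19450-13013/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for i := 0; i < 480; i++ { // roughly a 4m0s budget at 500ms per attempt
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6f6b679f8f-kpq9m", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}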
	I0815 18:37:50.482779   67936 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:37:50.495888   67936 ops.go:34] apiserver oom_adj: -16
	I0815 18:37:50.495912   67936 kubeadm.go:597] duration metric: took 9.4929178s to restartPrimaryControlPlane
	I0815 18:37:50.495924   67936 kubeadm.go:394] duration metric: took 9.549167115s to StartCluster
	I0815 18:37:50.495943   67936 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.496020   67936 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:50.497743   67936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.497976   67936 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:37:50.498166   67936 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:37:50.498225   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:50.498251   67936 addons.go:69] Setting storage-provisioner=true in profile "no-preload-599042"
	I0815 18:37:50.498266   67936 addons.go:69] Setting default-storageclass=true in profile "no-preload-599042"
	I0815 18:37:50.498287   67936 addons.go:234] Setting addon storage-provisioner=true in "no-preload-599042"
	I0815 18:37:50.498303   67936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-599042"
	W0815 18:37:50.498311   67936 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:37:50.498343   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.498708   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498733   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498745   67936 addons.go:69] Setting metrics-server=true in profile "no-preload-599042"
	I0815 18:37:50.498753   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.498783   67936 addons.go:234] Setting addon metrics-server=true in "no-preload-599042"
	W0815 18:37:50.498795   67936 addons.go:243] addon metrics-server should already be in state true
	I0815 18:37:50.498734   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.499070   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.499350   67936 out.go:177] * Verifying Kubernetes components...
	I0815 18:37:50.499436   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.499467   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.500629   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:50.514727   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0815 18:37:50.514956   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0815 18:37:50.515112   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515379   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515622   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515639   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.515844   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515866   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.516028   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.516697   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.516741   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.516854   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.517455   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.517487   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.517879   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0815 18:37:50.518180   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.518645   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.518666   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.518975   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.519155   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.522283   67936 addons.go:234] Setting addon default-storageclass=true in "no-preload-599042"
	W0815 18:37:50.522301   67936 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:37:50.522321   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.522589   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.522616   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.533306   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0815 18:37:50.533891   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.534378   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.534403   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.535077   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.535251   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.536333   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0815 18:37:50.536960   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.537421   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.537484   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.537500   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.537581   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0815 18:37:50.537832   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.537992   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.538044   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.538964   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.538983   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.539442   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.539494   67936 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:37:50.540127   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.540138   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.540166   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.540633   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:37:50.540653   67936 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:37:50.540673   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.541641   67936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:47.658449   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.159642   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.542848   67936 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.542867   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:37:50.542883   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.544059   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544644   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.544669   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544879   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.545056   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.545226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.545363   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.545609   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.545957   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.545984   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.546188   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.546350   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.546459   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.546563   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.576049   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0815 18:37:50.576398   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.576963   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.576991   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.577315   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.577536   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.579041   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.579244   67936 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.579259   67936 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:37:50.579273   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.583471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583857   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.583884   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583984   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.584140   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.584298   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.584431   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.711232   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:50.738297   67936 node_ready.go:35] waiting up to 6m0s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:50.787041   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.876459   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.926707   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:37:50.926727   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:37:50.967734   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:37:50.967764   67936 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:37:50.994557   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:50.994580   67936 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:37:51.018573   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:51.217167   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217199   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217511   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217561   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217570   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.217579   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217592   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217846   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217889   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217900   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.223755   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.223774   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.224006   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.224024   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.794888   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.794919   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795198   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.795227   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795240   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.795256   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.795267   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795503   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795521   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936158   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936178   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936438   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.936467   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936505   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936519   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936528   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936754   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936773   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936785   67936 addons.go:475] Verifying addon metrics-server=true in "no-preload-599042"
	I0815 18:37:51.938619   67936 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 18:37:47.901026   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.401023   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.901661   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.401358   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.901410   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.401040   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.901695   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.401365   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.901733   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:52.401439   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.939743   67936 addons.go:510] duration metric: took 1.441583595s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 18:37:52.742152   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:51.155350   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:53.654487   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.658151   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:54.658269   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.901361   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.401417   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.901380   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.401820   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.901113   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.401270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.900941   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.901834   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:57.401496   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.242506   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:57.742723   67936 node_ready.go:49] node "no-preload-599042" has status "Ready":"True"
	I0815 18:37:57.742746   67936 node_ready.go:38] duration metric: took 7.00442012s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:57.742764   67936 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:57.747927   67936 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752478   67936 pod_ready.go:93] pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:57.752513   67936 pod_ready.go:82] duration metric: took 4.560553ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752524   67936 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760896   67936 pod_ready.go:93] pod "etcd-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.760924   67936 pod_ready.go:82] duration metric: took 1.008390436s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760937   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774529   67936 pod_ready.go:93] pod "kube-apiserver-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.774557   67936 pod_ready.go:82] duration metric: took 13.611063ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774568   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793851   67936 pod_ready.go:93] pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.793873   67936 pod_ready.go:82] duration metric: took 19.297089ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793885   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943096   67936 pod_ready.go:93] pod "kube-proxy-bwb9h" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.943120   67936 pod_ready.go:82] duration metric: took 149.227014ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943129   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:56.154874   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:58.655280   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.158586   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:59.159257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.901938   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.401246   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.900950   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.400984   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.401707   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.901455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.901613   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:02.401302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.342426   67936 pod_ready.go:93] pod "kube-scheduler-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:59.342447   67936 pod_ready.go:82] duration metric: took 399.312035ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:59.342460   67936 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:38:01.349419   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.848558   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.154194   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.154779   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.658502   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:04.158895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:02.901914   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.401357   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.901258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.400961   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.401852   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.901115   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.401170   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.901694   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:07.401816   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.849586   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.349057   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:05.155847   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.653607   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:09.654245   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:06.658092   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.659361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.900966   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.401136   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.901534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.400982   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.901126   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.401120   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.901175   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.401704   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.901710   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:12.401712   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.349443   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.349942   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.655212   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:14.154508   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.158562   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:13.657985   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:15.658088   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.901680   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.401532   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.901198   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:13.901295   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:13.938743   68713 cri.go:89] found id: ""
	I0815 18:38:13.938770   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.938778   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:13.938786   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:13.938843   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:13.971997   68713 cri.go:89] found id: ""
	I0815 18:38:13.972029   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.972041   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:13.972048   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:13.972111   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:14.006793   68713 cri.go:89] found id: ""
	I0815 18:38:14.006825   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.006836   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:14.006844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:14.006903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:14.041546   68713 cri.go:89] found id: ""
	I0815 18:38:14.041575   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.041587   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:14.041595   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:14.041680   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:14.077614   68713 cri.go:89] found id: ""
	I0815 18:38:14.077639   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.077648   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:14.077653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:14.077704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:14.113683   68713 cri.go:89] found id: ""
	I0815 18:38:14.113711   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.113721   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:14.113730   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:14.113790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:14.149581   68713 cri.go:89] found id: ""
	I0815 18:38:14.149608   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.149616   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:14.149622   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:14.149678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:14.191576   68713 cri.go:89] found id: ""
	I0815 18:38:14.191606   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.191614   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:14.191622   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:14.191635   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:14.243253   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:14.243287   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:14.256818   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:14.256845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:14.382914   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.382933   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:14.382948   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:14.461826   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:14.461859   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.005615   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:17.020977   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:17.021042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:17.070191   68713 cri.go:89] found id: ""
	I0815 18:38:17.070220   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.070232   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:17.070239   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:17.070301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:17.118582   68713 cri.go:89] found id: ""
	I0815 18:38:17.118612   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.118624   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:17.118631   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:17.118693   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:17.165380   68713 cri.go:89] found id: ""
	I0815 18:38:17.165404   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.165413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:17.165421   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:17.165483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:17.204630   68713 cri.go:89] found id: ""
	I0815 18:38:17.204660   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.204670   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:17.204678   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:17.204740   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:17.239182   68713 cri.go:89] found id: ""
	I0815 18:38:17.239210   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.239219   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:17.239226   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:17.239285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:17.276329   68713 cri.go:89] found id: ""
	I0815 18:38:17.276356   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.276367   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:17.276375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:17.276472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:17.312387   68713 cri.go:89] found id: ""
	I0815 18:38:17.312418   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.312427   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:17.312433   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:17.312485   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:17.348277   68713 cri.go:89] found id: ""
	I0815 18:38:17.348300   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.348308   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:17.348315   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:17.348334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:17.424886   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:17.424924   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.465491   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:17.465518   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:17.517687   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:17.517719   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:17.531928   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:17.531959   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:17.606987   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.849001   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:17.349912   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:16.155496   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.653621   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.159850   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.658717   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.107740   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:20.123194   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:20.123255   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:20.163586   68713 cri.go:89] found id: ""
	I0815 18:38:20.163608   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.163619   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:20.163627   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:20.163676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:20.200171   68713 cri.go:89] found id: ""
	I0815 18:38:20.200196   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.200204   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:20.200210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:20.200270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:20.234739   68713 cri.go:89] found id: ""
	I0815 18:38:20.234770   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.234781   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:20.234788   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:20.234849   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:20.270182   68713 cri.go:89] found id: ""
	I0815 18:38:20.270206   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.270215   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:20.270220   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:20.270281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:20.303643   68713 cri.go:89] found id: ""
	I0815 18:38:20.303672   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.303682   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:20.303690   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:20.303748   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:20.339399   68713 cri.go:89] found id: ""
	I0815 18:38:20.339431   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.339441   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:20.339449   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:20.339511   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:20.377220   68713 cri.go:89] found id: ""
	I0815 18:38:20.377245   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.377252   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:20.377258   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:20.377310   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:20.411202   68713 cri.go:89] found id: ""
	I0815 18:38:20.411238   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.411249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:20.411268   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:20.411282   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:20.462846   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:20.462879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:20.476569   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:20.476597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:20.554243   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:20.554269   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:20.554285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:20.637450   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:20.637493   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:19.849194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:21.849502   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.655378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.154633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.160747   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.658706   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.182633   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:23.196953   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:23.197026   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:23.232011   68713 cri.go:89] found id: ""
	I0815 18:38:23.232039   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.232051   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:23.232064   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:23.232114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:23.266963   68713 cri.go:89] found id: ""
	I0815 18:38:23.266992   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.267000   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:23.267006   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:23.267069   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:23.306473   68713 cri.go:89] found id: ""
	I0815 18:38:23.306496   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.306504   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:23.306510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:23.306574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:23.343542   68713 cri.go:89] found id: ""
	I0815 18:38:23.343574   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.343585   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:23.343592   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:23.343652   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:23.382468   68713 cri.go:89] found id: ""
	I0815 18:38:23.382527   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.382539   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:23.382547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:23.382612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:23.418857   68713 cri.go:89] found id: ""
	I0815 18:38:23.418882   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.418891   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:23.418897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:23.418956   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:23.460971   68713 cri.go:89] found id: ""
	I0815 18:38:23.461004   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.461016   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:23.461023   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:23.461100   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:23.494139   68713 cri.go:89] found id: ""
	I0815 18:38:23.494172   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.494183   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:23.494194   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:23.494208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:23.547874   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:23.547908   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:23.562251   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:23.562278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:23.636503   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:23.636528   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:23.636545   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:23.716020   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:23.716051   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.255081   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:26.270118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:26.270184   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:26.308586   68713 cri.go:89] found id: ""
	I0815 18:38:26.308612   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.308623   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:26.308630   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:26.308688   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:26.344364   68713 cri.go:89] found id: ""
	I0815 18:38:26.344394   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.344410   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:26.344418   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:26.344533   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:26.381621   68713 cri.go:89] found id: ""
	I0815 18:38:26.381642   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.381650   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:26.381655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:26.381699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:26.416091   68713 cri.go:89] found id: ""
	I0815 18:38:26.416118   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.416128   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:26.416136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:26.416195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:26.456038   68713 cri.go:89] found id: ""
	I0815 18:38:26.456068   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.456080   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:26.456088   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:26.456151   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:26.490728   68713 cri.go:89] found id: ""
	I0815 18:38:26.490758   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.490769   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:26.490776   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:26.490837   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:26.529388   68713 cri.go:89] found id: ""
	I0815 18:38:26.529422   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.529434   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:26.529440   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:26.529489   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:26.567452   68713 cri.go:89] found id: ""
	I0815 18:38:26.567475   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.567484   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:26.567491   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:26.567503   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:26.641841   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:26.641863   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:26.641879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:26.719403   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:26.719438   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.760460   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:26.760507   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:26.814450   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:26.814480   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:24.349319   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:26.850207   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.155213   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.654265   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.656816   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.663849   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:30.158417   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.329451   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:29.344634   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:29.344706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:29.379278   68713 cri.go:89] found id: ""
	I0815 18:38:29.379308   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.379319   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:29.379326   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:29.379385   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:29.411854   68713 cri.go:89] found id: ""
	I0815 18:38:29.411881   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.411891   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:29.411898   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:29.411965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:29.443975   68713 cri.go:89] found id: ""
	I0815 18:38:29.444004   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.444014   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:29.444022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:29.444081   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:29.477919   68713 cri.go:89] found id: ""
	I0815 18:38:29.477944   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.477954   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:29.477962   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:29.478020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:29.518944   68713 cri.go:89] found id: ""
	I0815 18:38:29.518973   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.518985   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:29.518992   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:29.519052   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:29.553876   68713 cri.go:89] found id: ""
	I0815 18:38:29.553903   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.553913   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:29.553921   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:29.553974   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:29.590768   68713 cri.go:89] found id: ""
	I0815 18:38:29.590804   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.590815   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:29.590823   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:29.590879   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:29.625553   68713 cri.go:89] found id: ""
	I0815 18:38:29.625578   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.625586   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:29.625595   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:29.625606   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.668447   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:29.668478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:29.721002   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:29.721035   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:29.734955   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:29.734983   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:29.808703   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:29.808726   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:29.808742   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.397781   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:32.413876   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:32.413937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:32.453689   68713 cri.go:89] found id: ""
	I0815 18:38:32.453720   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.453777   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:32.453791   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:32.453839   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:32.490529   68713 cri.go:89] found id: ""
	I0815 18:38:32.490559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.490567   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:32.490573   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:32.490622   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:32.527680   68713 cri.go:89] found id: ""
	I0815 18:38:32.527710   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.527720   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:32.527727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:32.527790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:32.564619   68713 cri.go:89] found id: ""
	I0815 18:38:32.564656   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.564667   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:32.564677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:32.564745   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:32.600530   68713 cri.go:89] found id: ""
	I0815 18:38:32.600559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.600570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:32.600577   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:32.600639   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:32.636779   68713 cri.go:89] found id: ""
	I0815 18:38:32.636813   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.636821   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:32.636828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:32.636897   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:32.673743   68713 cri.go:89] found id: ""
	I0815 18:38:32.673774   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.673786   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:32.673794   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:32.673853   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:32.709678   68713 cri.go:89] found id: ""
	I0815 18:38:32.709708   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.709719   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:32.709730   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:32.709744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.785961   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:32.785998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.349763   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:31.350398   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:33.848873   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.155992   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.159855   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.657783   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.828205   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:32.828237   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:32.894624   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:32.894666   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:32.910742   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:32.910769   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:32.980853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.481438   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:35.495373   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:35.495444   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:35.529184   68713 cri.go:89] found id: ""
	I0815 18:38:35.529212   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.529221   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:35.529226   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:35.529275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:35.565188   68713 cri.go:89] found id: ""
	I0815 18:38:35.565214   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.565221   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:35.565227   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:35.565281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:35.600386   68713 cri.go:89] found id: ""
	I0815 18:38:35.600416   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.600428   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:35.600435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:35.600519   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:35.634255   68713 cri.go:89] found id: ""
	I0815 18:38:35.634278   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.634287   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:35.634293   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:35.634339   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:35.670236   68713 cri.go:89] found id: ""
	I0815 18:38:35.670260   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.670268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:35.670273   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:35.670354   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:35.707691   68713 cri.go:89] found id: ""
	I0815 18:38:35.707714   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.707722   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:35.707727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:35.707782   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:35.745791   68713 cri.go:89] found id: ""
	I0815 18:38:35.745820   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.745832   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:35.745844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:35.745916   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:35.784167   68713 cri.go:89] found id: ""
	I0815 18:38:35.784195   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.784205   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:35.784217   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:35.784234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:35.864681   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:35.864711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:35.906831   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:35.906858   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:35.960328   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:35.960366   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:35.974401   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:35.974428   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:36.044789   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.849744   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.348058   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.654916   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.155585   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.658767   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.159236   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.545951   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:38.561473   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:38.561540   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:38.597621   68713 cri.go:89] found id: ""
	I0815 18:38:38.597658   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.597668   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:38.597679   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:38.597756   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:38.632081   68713 cri.go:89] found id: ""
	I0815 18:38:38.632115   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.632127   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:38.632135   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:38.632203   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:38.669917   68713 cri.go:89] found id: ""
	I0815 18:38:38.669944   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.669952   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:38.669958   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:38.670015   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:38.707552   68713 cri.go:89] found id: ""
	I0815 18:38:38.707574   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.707582   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:38.707588   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:38.707642   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:38.746054   68713 cri.go:89] found id: ""
	I0815 18:38:38.746082   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.746093   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:38.746101   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:38.746166   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:38.783901   68713 cri.go:89] found id: ""
	I0815 18:38:38.783933   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.783945   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:38.783952   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:38.784018   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:38.825411   68713 cri.go:89] found id: ""
	I0815 18:38:38.825441   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.825452   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:38.825459   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:38.825520   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:38.863174   68713 cri.go:89] found id: ""
	I0815 18:38:38.863219   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.863231   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:38.863241   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:38.863254   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:38.914016   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:38.914045   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:38.927634   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:38.927659   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:38.993380   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:38.993407   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:38.993422   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:39.077075   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:39.077116   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:41.620219   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:41.633572   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:41.633628   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:41.670330   68713 cri.go:89] found id: ""
	I0815 18:38:41.670351   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.670358   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:41.670364   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:41.670418   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:41.706467   68713 cri.go:89] found id: ""
	I0815 18:38:41.706494   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.706502   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:41.706508   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:41.706564   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:41.742915   68713 cri.go:89] found id: ""
	I0815 18:38:41.742958   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.742970   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:41.742978   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:41.743044   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:41.778650   68713 cri.go:89] found id: ""
	I0815 18:38:41.778679   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.778687   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:41.778692   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:41.778739   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:41.813329   68713 cri.go:89] found id: ""
	I0815 18:38:41.813358   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.813369   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:41.813375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:41.813427   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:41.851351   68713 cri.go:89] found id: ""
	I0815 18:38:41.851383   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.851391   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:41.851398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:41.851460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:41.895097   68713 cri.go:89] found id: ""
	I0815 18:38:41.895130   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.895142   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:41.895150   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:41.895209   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:41.931306   68713 cri.go:89] found id: ""
	I0815 18:38:41.931336   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.931353   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:41.931365   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:41.931381   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:41.944796   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:41.944828   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:42.018868   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:42.018891   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:42.018903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:42.104304   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:42.104340   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:42.143625   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:42.143655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:40.349197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:42.850034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.655478   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.155025   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.159976   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:43.658013   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:45.658358   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.698568   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:44.712171   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:44.712247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.747043   68713 cri.go:89] found id: ""
	I0815 18:38:44.747071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.747079   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:44.747085   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:44.747143   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:44.782660   68713 cri.go:89] found id: ""
	I0815 18:38:44.782691   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.782703   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:44.782711   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:44.782765   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:44.821111   68713 cri.go:89] found id: ""
	I0815 18:38:44.821138   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.821146   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:44.821152   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:44.821222   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:44.859602   68713 cri.go:89] found id: ""
	I0815 18:38:44.859635   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.859647   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:44.859655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:44.859717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:44.895037   68713 cri.go:89] found id: ""
	I0815 18:38:44.895071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.895083   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:44.895090   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:44.895175   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:44.928729   68713 cri.go:89] found id: ""
	I0815 18:38:44.928759   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.928771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:44.928781   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:44.928844   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:44.963945   68713 cri.go:89] found id: ""
	I0815 18:38:44.963977   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.963987   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:44.963996   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:44.964060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:45.001166   68713 cri.go:89] found id: ""
	I0815 18:38:45.001195   68713 logs.go:276] 0 containers: []
	W0815 18:38:45.001206   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:45.001218   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:45.001234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:45.015181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:45.015209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:45.084297   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:45.084322   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:45.084334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:45.173833   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:45.173866   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:45.211863   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:45.211899   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:47.771009   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:47.784865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:47.784926   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.850332   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.347985   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:46.654797   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:48.654936   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.658823   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:50.178115   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.818497   68713 cri.go:89] found id: ""
	I0815 18:38:47.818526   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.818538   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:47.818545   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:47.818608   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:47.857900   68713 cri.go:89] found id: ""
	I0815 18:38:47.857927   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.857935   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:47.857941   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:47.857997   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:47.895778   68713 cri.go:89] found id: ""
	I0815 18:38:47.895809   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.895822   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:47.895829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:47.895887   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:47.937410   68713 cri.go:89] found id: ""
	I0815 18:38:47.937434   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.937442   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:47.937448   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:47.937505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:47.976414   68713 cri.go:89] found id: ""
	I0815 18:38:47.976442   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.976450   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:47.976455   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:47.976525   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:48.014863   68713 cri.go:89] found id: ""
	I0815 18:38:48.014891   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.014899   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:48.014906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:48.014969   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:48.053508   68713 cri.go:89] found id: ""
	I0815 18:38:48.053536   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.053546   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:48.053554   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:48.053624   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:48.088900   68713 cri.go:89] found id: ""
	I0815 18:38:48.088931   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.088943   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:48.088954   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:48.088969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:48.140415   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:48.140447   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:48.155958   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:48.155985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:48.229338   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:48.229368   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:48.229383   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:48.317776   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:48.317814   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:50.860592   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:50.877070   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:50.877154   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:50.937590   68713 cri.go:89] found id: ""
	I0815 18:38:50.937615   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.937622   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:50.937628   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:50.937687   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:50.972573   68713 cri.go:89] found id: ""
	I0815 18:38:50.972603   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.972614   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:50.972622   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:50.972683   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:51.008786   68713 cri.go:89] found id: ""
	I0815 18:38:51.008811   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.008820   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:51.008826   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:51.008875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:51.043076   68713 cri.go:89] found id: ""
	I0815 18:38:51.043105   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.043116   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:51.043123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:51.043186   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:51.078344   68713 cri.go:89] found id: ""
	I0815 18:38:51.078379   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.078391   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:51.078398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:51.078453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:51.114494   68713 cri.go:89] found id: ""
	I0815 18:38:51.114521   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.114532   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:51.114540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:51.114600   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:51.153871   68713 cri.go:89] found id: ""
	I0815 18:38:51.153898   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.153909   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:51.153917   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:51.153984   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:51.187908   68713 cri.go:89] found id: ""
	I0815 18:38:51.187937   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.187948   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:51.187959   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:51.187974   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:51.264172   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:51.264198   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:51.264214   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:51.345238   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:51.345285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:51.385720   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:51.385745   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:51.443313   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:51.443353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:49.849156   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.348628   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:51.154188   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.155256   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.658773   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:54.659127   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.959176   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:53.972031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:53.972101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:54.010673   68713 cri.go:89] found id: ""
	I0815 18:38:54.010699   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.010707   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:54.010714   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:54.010775   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:54.045632   68713 cri.go:89] found id: ""
	I0815 18:38:54.045662   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.045672   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:54.045678   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:54.045727   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:54.082111   68713 cri.go:89] found id: ""
	I0815 18:38:54.082134   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.082142   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:54.082148   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:54.082206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:54.118210   68713 cri.go:89] found id: ""
	I0815 18:38:54.118232   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.118239   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:54.118246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:54.118305   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:54.155474   68713 cri.go:89] found id: ""
	I0815 18:38:54.155499   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.155508   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:54.155515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:54.155572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:54.193263   68713 cri.go:89] found id: ""
	I0815 18:38:54.193298   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.193305   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:54.193311   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:54.193365   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:54.233389   68713 cri.go:89] found id: ""
	I0815 18:38:54.233416   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.233428   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:54.233435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:54.233502   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:54.266127   68713 cri.go:89] found id: ""
	I0815 18:38:54.266155   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.266164   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:54.266176   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:54.266199   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:54.318724   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:54.318762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:54.332993   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:54.333022   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:54.405895   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:54.405915   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:54.405926   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.485819   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:54.485875   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.024956   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:57.038182   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:57.038246   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:57.078020   68713 cri.go:89] found id: ""
	I0815 18:38:57.078044   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.078055   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:57.078063   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:57.078127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:57.115077   68713 cri.go:89] found id: ""
	I0815 18:38:57.115101   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.115110   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:57.115118   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:57.115179   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:57.152711   68713 cri.go:89] found id: ""
	I0815 18:38:57.152737   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.152747   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:57.152755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:57.152819   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:57.190000   68713 cri.go:89] found id: ""
	I0815 18:38:57.190034   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.190042   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:57.190048   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:57.190096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:57.224947   68713 cri.go:89] found id: ""
	I0815 18:38:57.224978   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.224990   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:57.224998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:57.225060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:57.262329   68713 cri.go:89] found id: ""
	I0815 18:38:57.262365   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.262375   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:57.262383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:57.262458   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:57.299471   68713 cri.go:89] found id: ""
	I0815 18:38:57.299498   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.299507   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:57.299513   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:57.299572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:57.357163   68713 cri.go:89] found id: ""
	I0815 18:38:57.357202   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.357211   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:57.357220   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:57.357236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.405154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:57.405184   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:57.459245   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:57.459277   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:57.473663   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:57.473699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:57.546670   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:57.546699   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:57.546715   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.348864   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.848276   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:55.655015   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.158306   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.662722   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:59.159559   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.124455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:00.137985   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:00.138053   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:00.175201   68713 cri.go:89] found id: ""
	I0815 18:39:00.175231   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.175242   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:00.175250   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:00.175328   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:00.209376   68713 cri.go:89] found id: ""
	I0815 18:39:00.209406   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.209418   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:00.209426   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:00.209484   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:00.246860   68713 cri.go:89] found id: ""
	I0815 18:39:00.246889   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.246899   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:00.246906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:00.246965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:00.282787   68713 cri.go:89] found id: ""
	I0815 18:39:00.282814   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.282823   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:00.282829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:00.282875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:00.330227   68713 cri.go:89] found id: ""
	I0815 18:39:00.330256   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.330268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:00.330275   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:00.330338   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:00.363028   68713 cri.go:89] found id: ""
	I0815 18:39:00.363061   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.363072   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:00.363079   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:00.363169   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:00.400484   68713 cri.go:89] found id: ""
	I0815 18:39:00.400522   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.400533   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:00.400540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:00.400597   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:00.436187   68713 cri.go:89] found id: ""
	I0815 18:39:00.436225   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.436238   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:00.436252   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:00.436267   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:00.481960   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:00.481985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:00.535103   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:00.535138   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:00.548541   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:00.548568   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:00.619476   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:00.619505   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:00.619525   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:01.347916   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.349448   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.654384   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.155048   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:01.658374   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.658824   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.206473   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:03.222893   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:03.222967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:03.272249   68713 cri.go:89] found id: ""
	I0815 18:39:03.272275   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.272283   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:03.272291   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:03.272355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:03.336786   68713 cri.go:89] found id: ""
	I0815 18:39:03.336811   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.336819   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:03.336825   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:03.336884   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:03.375977   68713 cri.go:89] found id: ""
	I0815 18:39:03.376002   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.376011   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:03.376016   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:03.376063   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:03.410304   68713 cri.go:89] found id: ""
	I0815 18:39:03.410326   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.410335   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:03.410340   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:03.410403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:03.446147   68713 cri.go:89] found id: ""
	I0815 18:39:03.446176   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.446188   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:03.446195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:03.446256   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:03.482555   68713 cri.go:89] found id: ""
	I0815 18:39:03.482582   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.482591   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:03.482597   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:03.482654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:03.519477   68713 cri.go:89] found id: ""
	I0815 18:39:03.519503   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.519511   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:03.519517   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:03.519574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:03.556539   68713 cri.go:89] found id: ""
	I0815 18:39:03.556566   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.556577   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:03.556587   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:03.556602   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:03.610553   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:03.610593   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:03.625799   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:03.625827   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:03.697106   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:03.697132   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:03.697149   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:03.779089   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:03.779120   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:06.319280   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:06.333284   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:06.333355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:06.369696   68713 cri.go:89] found id: ""
	I0815 18:39:06.369719   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.369727   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:06.369732   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:06.369780   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:06.405023   68713 cri.go:89] found id: ""
	I0815 18:39:06.405046   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.405053   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:06.405059   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:06.405113   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:06.439948   68713 cri.go:89] found id: ""
	I0815 18:39:06.439974   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.439983   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:06.439989   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:06.440048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:06.475613   68713 cri.go:89] found id: ""
	I0815 18:39:06.475642   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.475654   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:06.475664   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:06.475723   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:06.510698   68713 cri.go:89] found id: ""
	I0815 18:39:06.510721   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.510729   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:06.510735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:06.510783   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:06.545831   68713 cri.go:89] found id: ""
	I0815 18:39:06.545861   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.545873   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:06.545880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:06.545940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:06.579027   68713 cri.go:89] found id: ""
	I0815 18:39:06.579053   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.579064   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:06.579072   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:06.579132   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:06.615308   68713 cri.go:89] found id: ""
	I0815 18:39:06.615339   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.615352   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:06.615371   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:06.615396   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:06.671523   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:06.671555   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:06.685556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:06.685580   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:06.765036   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:06.765059   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:06.765071   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:06.843412   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:06.843457   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:05.849018   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.849342   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:05.654854   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.654910   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.655240   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:06.158409   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:08.657902   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:10.658258   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.390799   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:09.404099   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:09.404160   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:09.439534   68713 cri.go:89] found id: ""
	I0815 18:39:09.439563   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.439582   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:09.439591   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:09.439654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:09.478933   68713 cri.go:89] found id: ""
	I0815 18:39:09.478963   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.478974   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:09.478982   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:09.479042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:09.514396   68713 cri.go:89] found id: ""
	I0815 18:39:09.514425   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.514436   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:09.514444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:09.514510   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:09.547749   68713 cri.go:89] found id: ""
	I0815 18:39:09.547775   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.547785   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:09.547793   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:09.547856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:09.583583   68713 cri.go:89] found id: ""
	I0815 18:39:09.583611   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.583623   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:09.583631   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:09.583695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:09.616530   68713 cri.go:89] found id: ""
	I0815 18:39:09.616560   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.616570   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:09.616576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:09.616641   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:09.655167   68713 cri.go:89] found id: ""
	I0815 18:39:09.655189   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.655199   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:09.655207   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:09.655263   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:09.691368   68713 cri.go:89] found id: ""
	I0815 18:39:09.691391   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.691398   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:09.691411   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:09.691426   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:09.740739   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:09.740770   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:09.755049   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:09.755074   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:09.825053   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:09.825080   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:09.825095   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:09.903036   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:09.903076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:12.441898   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:12.454637   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:12.454712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:12.494604   68713 cri.go:89] found id: ""
	I0815 18:39:12.494632   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.494640   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:12.494646   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:12.494699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:12.531587   68713 cri.go:89] found id: ""
	I0815 18:39:12.531631   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.531642   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:12.531649   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:12.531710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:12.564991   68713 cri.go:89] found id: ""
	I0815 18:39:12.565014   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.565021   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:12.565027   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:12.565096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:12.600667   68713 cri.go:89] found id: ""
	I0815 18:39:12.600698   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.600709   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:12.600715   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:12.600777   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:12.633658   68713 cri.go:89] found id: ""
	I0815 18:39:12.633681   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.633691   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:12.633698   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:12.633759   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:12.673709   68713 cri.go:89] found id: ""
	I0815 18:39:12.673730   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.673737   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:12.673743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:12.673790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:12.707353   68713 cri.go:89] found id: ""
	I0815 18:39:12.707378   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.707385   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:12.707390   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:12.707437   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:12.746926   68713 cri.go:89] found id: ""
	I0815 18:39:12.746949   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.746957   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:12.746965   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:12.746977   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:09.853116   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.348297   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:11.655347   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:14.154929   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:13.158257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:15.158457   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.792154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:12.792180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:12.843933   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:12.843968   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:12.859583   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:12.859609   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:12.940856   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:12.940880   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:12.940895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:15.520265   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:15.533677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:15.533754   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:15.572109   68713 cri.go:89] found id: ""
	I0815 18:39:15.572135   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.572145   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:15.572153   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:15.572221   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:15.607442   68713 cri.go:89] found id: ""
	I0815 18:39:15.607472   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.607484   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:15.607492   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:15.607551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:15.642206   68713 cri.go:89] found id: ""
	I0815 18:39:15.642230   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.642238   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:15.642246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:15.642308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:15.677914   68713 cri.go:89] found id: ""
	I0815 18:39:15.677945   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.677956   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:15.677963   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:15.678049   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:15.714466   68713 cri.go:89] found id: ""
	I0815 18:39:15.714496   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.714504   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:15.714510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:15.714563   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:15.750961   68713 cri.go:89] found id: ""
	I0815 18:39:15.750987   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.750995   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:15.751002   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:15.751050   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:15.785399   68713 cri.go:89] found id: ""
	I0815 18:39:15.785434   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.785444   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:15.785450   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:15.785498   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:15.821547   68713 cri.go:89] found id: ""
	I0815 18:39:15.821571   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.821578   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:15.821586   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:15.821597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:15.875299   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:15.875329   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:15.890376   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:15.890408   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:15.957317   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:15.957337   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:15.957352   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:16.033952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:16.033997   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:14.349171   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.849292   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.850822   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.654572   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.656041   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:17.657984   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:19.658366   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.571953   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:18.584652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:18.584721   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:18.617043   68713 cri.go:89] found id: ""
	I0815 18:39:18.617066   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.617073   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:18.617079   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:18.617127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:18.651080   68713 cri.go:89] found id: ""
	I0815 18:39:18.651112   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.651123   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:18.651130   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:18.651187   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:18.686857   68713 cri.go:89] found id: ""
	I0815 18:39:18.686890   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.686901   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:18.686909   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:18.686975   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:18.719397   68713 cri.go:89] found id: ""
	I0815 18:39:18.719434   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.719444   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:18.719452   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:18.719521   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:18.758316   68713 cri.go:89] found id: ""
	I0815 18:39:18.758349   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.758360   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:18.758366   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:18.758435   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:18.791586   68713 cri.go:89] found id: ""
	I0815 18:39:18.791609   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.791617   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:18.791623   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:18.791690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:18.827905   68713 cri.go:89] found id: ""
	I0815 18:39:18.827929   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.827937   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:18.827945   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:18.828004   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:18.869371   68713 cri.go:89] found id: ""
	I0815 18:39:18.869404   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.869412   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:18.869420   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:18.869432   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:18.920124   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:18.920158   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:18.936137   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:18.936168   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:19.006877   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:19.006902   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:19.006913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:19.088909   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:19.088953   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.632734   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:21.647246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:21.647322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:21.685574   68713 cri.go:89] found id: ""
	I0815 18:39:21.685606   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.685614   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:21.685620   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:21.685676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:21.717073   68713 cri.go:89] found id: ""
	I0815 18:39:21.717112   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.717124   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:21.717133   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:21.717205   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:21.752072   68713 cri.go:89] found id: ""
	I0815 18:39:21.752101   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.752112   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:21.752120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:21.752182   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:21.786811   68713 cri.go:89] found id: ""
	I0815 18:39:21.786834   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.786842   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:21.786848   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:21.786893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:21.823694   68713 cri.go:89] found id: ""
	I0815 18:39:21.823719   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.823728   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:21.823733   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:21.823790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:21.859358   68713 cri.go:89] found id: ""
	I0815 18:39:21.859387   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.859398   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:21.859405   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:21.859469   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:21.893723   68713 cri.go:89] found id: ""
	I0815 18:39:21.893751   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.893761   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:21.893769   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:21.893826   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:21.929338   68713 cri.go:89] found id: ""
	I0815 18:39:21.929368   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.929379   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:21.929388   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:21.929414   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:21.979107   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:21.979141   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:21.993968   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:21.994005   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:22.063359   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:22.063384   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:22.063398   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:22.144303   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:22.144337   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.348202   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.349199   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.154244   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.155954   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.658572   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.658782   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.658946   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:24.688104   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:24.701230   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:24.701298   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:24.735056   68713 cri.go:89] found id: ""
	I0815 18:39:24.735086   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.735097   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:24.735104   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:24.735172   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:24.769704   68713 cri.go:89] found id: ""
	I0815 18:39:24.769732   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.769743   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:24.769751   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:24.769812   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:24.808763   68713 cri.go:89] found id: ""
	I0815 18:39:24.808788   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.808796   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:24.808807   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:24.808856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:24.846997   68713 cri.go:89] found id: ""
	I0815 18:39:24.847028   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.847038   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:24.847045   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:24.847106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:24.881681   68713 cri.go:89] found id: ""
	I0815 18:39:24.881705   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.881713   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:24.881719   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:24.881772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:24.917000   68713 cri.go:89] found id: ""
	I0815 18:39:24.917024   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.917032   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:24.917040   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:24.917088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:24.951133   68713 cri.go:89] found id: ""
	I0815 18:39:24.951156   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.951164   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:24.951170   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:24.951218   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:24.987306   68713 cri.go:89] found id: ""
	I0815 18:39:24.987331   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.987339   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:24.987347   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:24.987360   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:25.039533   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:25.039566   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:25.053011   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:25.053036   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:25.125864   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:25.125884   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:25.125895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:25.209885   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:25.209916   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:27.751681   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:27.765316   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:27.765390   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:25.848840   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.849344   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.156068   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.654722   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:28.158317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.158632   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.805820   68713 cri.go:89] found id: ""
	I0815 18:39:27.805858   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.805870   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:27.805878   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:27.805940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:27.846684   68713 cri.go:89] found id: ""
	I0815 18:39:27.846717   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.846727   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:27.846737   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:27.846801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:27.882326   68713 cri.go:89] found id: ""
	I0815 18:39:27.882358   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.882370   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:27.882378   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:27.882448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:27.917340   68713 cri.go:89] found id: ""
	I0815 18:39:27.917416   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.917431   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:27.917442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:27.917505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:27.952674   68713 cri.go:89] found id: ""
	I0815 18:39:27.952700   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.952708   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:27.952714   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:27.952763   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:27.986103   68713 cri.go:89] found id: ""
	I0815 18:39:27.986132   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.986143   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:27.986151   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:27.986212   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:28.023674   68713 cri.go:89] found id: ""
	I0815 18:39:28.023716   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.023735   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:28.023742   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:28.023807   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:28.064902   68713 cri.go:89] found id: ""
	I0815 18:39:28.064929   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.064937   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:28.064945   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:28.064957   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:28.116145   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:28.116180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:28.130435   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:28.130462   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:28.204899   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:28.204920   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:28.204931   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:28.284165   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:28.284202   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:30.824135   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:30.837515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:30.837583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:30.874671   68713 cri.go:89] found id: ""
	I0815 18:39:30.874695   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.874705   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:30.874712   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:30.874776   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:30.909990   68713 cri.go:89] found id: ""
	I0815 18:39:30.910027   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.910038   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:30.910045   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:30.910106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:30.946824   68713 cri.go:89] found id: ""
	I0815 18:39:30.946851   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.946859   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:30.946865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:30.946912   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:30.983392   68713 cri.go:89] found id: ""
	I0815 18:39:30.983419   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.983429   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:30.983437   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:30.983505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:31.023471   68713 cri.go:89] found id: ""
	I0815 18:39:31.023500   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.023510   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:31.023518   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:31.023583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:31.063586   68713 cri.go:89] found id: ""
	I0815 18:39:31.063616   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.063627   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:31.063636   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:31.063696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:31.103147   68713 cri.go:89] found id: ""
	I0815 18:39:31.103173   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.103180   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:31.103186   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:31.103237   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:31.144082   68713 cri.go:89] found id: ""
	I0815 18:39:31.144113   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.144124   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:31.144136   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:31.144150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:31.212535   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:31.212563   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:31.212586   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:31.292039   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:31.292076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:31.335023   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:31.335050   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:31.388817   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:31.388853   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:30.349110   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.349209   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.154683   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.653806   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.654716   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.658245   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.659119   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:33.925861   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:33.939604   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:33.939668   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:33.974538   68713 cri.go:89] found id: ""
	I0815 18:39:33.974563   68713 logs.go:276] 0 containers: []
	W0815 18:39:33.974571   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:33.974577   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:33.974634   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:34.009017   68713 cri.go:89] found id: ""
	I0815 18:39:34.009048   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.009058   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:34.009064   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:34.009120   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:34.049478   68713 cri.go:89] found id: ""
	I0815 18:39:34.049501   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.049517   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:34.049523   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:34.049576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:34.091011   68713 cri.go:89] found id: ""
	I0815 18:39:34.091040   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.091050   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:34.091056   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:34.091114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:34.126617   68713 cri.go:89] found id: ""
	I0815 18:39:34.126640   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.126650   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:34.126657   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:34.126706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:34.168140   68713 cri.go:89] found id: ""
	I0815 18:39:34.168169   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.168179   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:34.168187   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:34.168279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:34.205052   68713 cri.go:89] found id: ""
	I0815 18:39:34.205081   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.205091   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:34.205098   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:34.205173   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:34.238474   68713 cri.go:89] found id: ""
	I0815 18:39:34.238499   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.238506   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:34.238521   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:34.238540   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.280574   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:34.280601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:34.332662   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:34.332704   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:34.348556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:34.348591   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:34.421428   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:34.421450   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:34.421464   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.004855   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:37.019306   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:37.019378   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:37.057588   68713 cri.go:89] found id: ""
	I0815 18:39:37.057618   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.057626   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:37.057641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:37.057706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:37.095645   68713 cri.go:89] found id: ""
	I0815 18:39:37.095678   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.095687   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:37.095693   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:37.095750   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:37.131669   68713 cri.go:89] found id: ""
	I0815 18:39:37.131696   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.131711   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:37.131717   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:37.131772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:37.185065   68713 cri.go:89] found id: ""
	I0815 18:39:37.185097   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.185108   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:37.185115   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:37.185180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:37.220220   68713 cri.go:89] found id: ""
	I0815 18:39:37.220251   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.220262   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:37.220269   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:37.220322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:37.259816   68713 cri.go:89] found id: ""
	I0815 18:39:37.259849   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.259859   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:37.259868   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:37.259920   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:37.292777   68713 cri.go:89] found id: ""
	I0815 18:39:37.292807   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.292818   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:37.292825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:37.292888   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:37.328673   68713 cri.go:89] found id: ""
	I0815 18:39:37.328703   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.328714   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:37.328725   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:37.328740   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:37.379131   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:37.379172   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:37.392982   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:37.393017   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:37.470727   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:37.470750   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:37.470766   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.552353   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:37.552386   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.349765   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:36.655101   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.154433   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.158746   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.658907   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:40.094008   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:40.107681   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:40.107753   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:40.142229   68713 cri.go:89] found id: ""
	I0815 18:39:40.142254   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.142264   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:40.142271   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:40.142333   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:40.180622   68713 cri.go:89] found id: ""
	I0815 18:39:40.180650   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.180665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:40.180672   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:40.180733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:40.219085   68713 cri.go:89] found id: ""
	I0815 18:39:40.219113   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.219120   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:40.219126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:40.219174   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:40.254807   68713 cri.go:89] found id: ""
	I0815 18:39:40.254838   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.254850   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:40.254858   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:40.254940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:40.290438   68713 cri.go:89] found id: ""
	I0815 18:39:40.290466   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.290478   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:40.290484   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:40.290547   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:40.326320   68713 cri.go:89] found id: ""
	I0815 18:39:40.326356   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.326364   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:40.326370   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:40.326429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:40.361538   68713 cri.go:89] found id: ""
	I0815 18:39:40.361563   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.361570   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:40.361576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:40.361629   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:40.397275   68713 cri.go:89] found id: ""
	I0815 18:39:40.397304   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.397316   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:40.397326   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:40.397342   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:40.466042   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:40.466064   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:40.466078   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:40.544915   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:40.544951   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:40.584992   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:40.585019   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:40.634792   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:40.634837   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:39.848609   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.849831   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.655153   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.655372   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:42.159650   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:44.658547   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.149819   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:43.164578   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:43.164649   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:43.199576   68713 cri.go:89] found id: ""
	I0815 18:39:43.199621   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.199633   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:43.199641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:43.199702   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:43.233783   68713 cri.go:89] found id: ""
	I0815 18:39:43.233820   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.233833   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:43.233840   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:43.233911   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:43.269406   68713 cri.go:89] found id: ""
	I0815 18:39:43.269437   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.269449   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:43.269457   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:43.269570   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:43.310686   68713 cri.go:89] found id: ""
	I0815 18:39:43.310715   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.310726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:43.310734   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:43.310795   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:43.348662   68713 cri.go:89] found id: ""
	I0815 18:39:43.348689   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.348699   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:43.348706   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:43.348767   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:43.385676   68713 cri.go:89] found id: ""
	I0815 18:39:43.385714   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.385726   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:43.385737   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:43.385802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:43.422605   68713 cri.go:89] found id: ""
	I0815 18:39:43.422634   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.422645   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:43.422653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:43.422712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:43.463208   68713 cri.go:89] found id: ""
	I0815 18:39:43.463238   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.463249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:43.463260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:43.463278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:43.476637   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:43.476664   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:43.552239   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:43.552263   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:43.552278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:43.653055   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:43.653108   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:43.699166   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:43.699192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.251725   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:46.265164   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:46.265240   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:46.305095   68713 cri.go:89] found id: ""
	I0815 18:39:46.305123   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.305133   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:46.305140   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:46.305196   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:46.349744   68713 cri.go:89] found id: ""
	I0815 18:39:46.349773   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.349783   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:46.349790   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:46.349858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:46.385807   68713 cri.go:89] found id: ""
	I0815 18:39:46.385839   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.385847   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:46.385853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:46.385908   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:46.419977   68713 cri.go:89] found id: ""
	I0815 18:39:46.420011   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.420024   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:46.420031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:46.420093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:46.454852   68713 cri.go:89] found id: ""
	I0815 18:39:46.454883   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.454894   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:46.454901   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:46.454962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:46.497157   68713 cri.go:89] found id: ""
	I0815 18:39:46.497192   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.497202   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:46.497210   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:46.497271   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:46.530191   68713 cri.go:89] found id: ""
	I0815 18:39:46.530218   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.530226   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:46.530232   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:46.530282   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:46.566024   68713 cri.go:89] found id: ""
	I0815 18:39:46.566050   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.566063   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:46.566074   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:46.566103   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.621969   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:46.622004   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:46.636576   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:46.636603   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:46.706819   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:46.706842   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:46.706857   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:46.786589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:46.786634   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:44.352685   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.849090   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.849424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:45.655900   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.154862   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.658710   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.157317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.324853   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:49.343543   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:49.343618   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:49.396260   68713 cri.go:89] found id: ""
	I0815 18:39:49.396292   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.396303   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:49.396311   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:49.396380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:49.437579   68713 cri.go:89] found id: ""
	I0815 18:39:49.437604   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.437612   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:49.437617   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:49.437663   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:49.476206   68713 cri.go:89] found id: ""
	I0815 18:39:49.476232   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.476239   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:49.476245   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:49.476296   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:49.511324   68713 cri.go:89] found id: ""
	I0815 18:39:49.511349   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.511357   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:49.511363   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:49.511428   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:49.545875   68713 cri.go:89] found id: ""
	I0815 18:39:49.545907   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.545916   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:49.545922   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:49.545981   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:49.582176   68713 cri.go:89] found id: ""
	I0815 18:39:49.582204   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.582228   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:49.582246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:49.582309   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:49.623288   68713 cri.go:89] found id: ""
	I0815 18:39:49.623318   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.623327   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:49.623333   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:49.623394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:49.662352   68713 cri.go:89] found id: ""
	I0815 18:39:49.662377   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.662389   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:49.662399   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:49.662424   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:49.745582   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:49.745617   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:49.785256   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:49.785295   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:49.835944   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:49.835979   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:49.852859   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:49.852886   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:49.928427   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.429223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:52.442384   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:52.442460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:52.480515   68713 cri.go:89] found id: ""
	I0815 18:39:52.480543   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.480553   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:52.480558   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:52.480605   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:52.518346   68713 cri.go:89] found id: ""
	I0815 18:39:52.518382   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.518393   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:52.518401   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:52.518460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:52.557696   68713 cri.go:89] found id: ""
	I0815 18:39:52.557722   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.557731   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:52.557736   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:52.557786   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:52.590849   68713 cri.go:89] found id: ""
	I0815 18:39:52.590879   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.590890   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:52.590898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:52.590961   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:52.629950   68713 cri.go:89] found id: ""
	I0815 18:39:52.629980   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.629992   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:52.629999   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:52.630047   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:52.666039   68713 cri.go:89] found id: ""
	I0815 18:39:52.666070   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.666081   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:52.666089   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:52.666146   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:52.699917   68713 cri.go:89] found id: ""
	I0815 18:39:52.699941   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.699949   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:52.699955   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:52.700001   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:52.735944   68713 cri.go:89] found id: ""
	I0815 18:39:52.735973   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.735981   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:52.735989   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:52.736001   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:39:50.849633   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.850298   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:50.155118   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.155166   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:54.653844   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:51.159401   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:53.658513   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:39:52.805519   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.805537   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:52.805559   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:52.894175   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:52.894213   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:52.932974   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:52.933006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:52.984206   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:52.984244   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.498477   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:55.511319   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:55.511380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:55.544899   68713 cri.go:89] found id: ""
	I0815 18:39:55.544928   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.544936   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:55.544943   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:55.545003   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:55.578821   68713 cri.go:89] found id: ""
	I0815 18:39:55.578855   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.578864   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:55.578869   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:55.578922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:55.615392   68713 cri.go:89] found id: ""
	I0815 18:39:55.615422   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.615434   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:55.615441   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:55.615501   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:55.653456   68713 cri.go:89] found id: ""
	I0815 18:39:55.653482   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.653493   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:55.653500   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:55.653558   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:55.687716   68713 cri.go:89] found id: ""
	I0815 18:39:55.687741   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.687749   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:55.687755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:55.687802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:55.725518   68713 cri.go:89] found id: ""
	I0815 18:39:55.725543   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.725553   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:55.725561   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:55.725631   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:55.758451   68713 cri.go:89] found id: ""
	I0815 18:39:55.758479   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.758490   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:55.758498   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:55.758560   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:55.792653   68713 cri.go:89] found id: ""
	I0815 18:39:55.792680   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.792687   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:55.792699   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:55.792710   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:55.832127   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:55.832156   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:55.885255   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:55.885289   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.898980   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:55.899009   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:55.967579   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:55.967609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:55.967624   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:55.348998   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:57.349656   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.654840   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.655471   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.158348   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.658194   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.658852   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.543524   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:58.556338   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:58.556412   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:58.593359   68713 cri.go:89] found id: ""
	I0815 18:39:58.593390   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.593401   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:58.593409   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:58.593472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:58.628446   68713 cri.go:89] found id: ""
	I0815 18:39:58.628471   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.628481   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:58.628504   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:58.628567   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:58.663930   68713 cri.go:89] found id: ""
	I0815 18:39:58.663954   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.663964   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:58.663971   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:58.664028   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:58.701070   68713 cri.go:89] found id: ""
	I0815 18:39:58.701095   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.701103   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:58.701108   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:58.701156   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:58.734427   68713 cri.go:89] found id: ""
	I0815 18:39:58.734457   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.734468   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:58.734476   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:58.734543   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:58.769121   68713 cri.go:89] found id: ""
	I0815 18:39:58.769144   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.769152   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:58.769162   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:58.769215   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:58.805771   68713 cri.go:89] found id: ""
	I0815 18:39:58.805796   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.805803   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:58.805808   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:58.805856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:58.840288   68713 cri.go:89] found id: ""
	I0815 18:39:58.840315   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.840325   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:58.840336   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:58.840351   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:58.895856   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:58.895893   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:58.909453   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:58.909478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:58.975939   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:58.975960   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:58.975971   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.055318   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:59.055353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.595588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:01.608625   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:01.608690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:01.646105   68713 cri.go:89] found id: ""
	I0815 18:40:01.646133   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.646144   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:01.646151   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:01.646214   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:01.685162   68713 cri.go:89] found id: ""
	I0815 18:40:01.685192   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.685202   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:01.685210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:01.685261   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:01.721452   68713 cri.go:89] found id: ""
	I0815 18:40:01.721479   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.721499   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:01.721507   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:01.721576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:01.762288   68713 cri.go:89] found id: ""
	I0815 18:40:01.762318   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.762331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:01.762339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:01.762429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:01.800547   68713 cri.go:89] found id: ""
	I0815 18:40:01.800579   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.800590   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:01.800598   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:01.800660   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:01.839182   68713 cri.go:89] found id: ""
	I0815 18:40:01.839214   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.839223   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:01.839229   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:01.839294   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:01.875364   68713 cri.go:89] found id: ""
	I0815 18:40:01.875390   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.875398   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:01.875404   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:01.875452   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:01.910485   68713 cri.go:89] found id: ""
	I0815 18:40:01.910512   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.910521   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:01.910535   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:01.910547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.951970   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:01.951998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:02.005720   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:02.005764   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:02.020941   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:02.020969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:02.101206   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:02.101224   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:02.101236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.850909   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:02.349180   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.659366   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.153614   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.158375   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.159868   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:04.687482   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:04.701501   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:04.701562   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.739613   68713 cri.go:89] found id: ""
	I0815 18:40:04.739636   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.739644   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:04.739650   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:04.739704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:04.774419   68713 cri.go:89] found id: ""
	I0815 18:40:04.774443   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.774453   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:04.774460   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:04.774522   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:04.809516   68713 cri.go:89] found id: ""
	I0815 18:40:04.809538   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.809547   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:04.809552   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:04.809612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:04.843822   68713 cri.go:89] found id: ""
	I0815 18:40:04.843850   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.843870   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:04.843878   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:04.843942   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:04.883853   68713 cri.go:89] found id: ""
	I0815 18:40:04.883881   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.883892   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:04.883900   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:04.883962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:04.918811   68713 cri.go:89] found id: ""
	I0815 18:40:04.918838   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.918846   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:04.918852   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:04.918903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:04.953076   68713 cri.go:89] found id: ""
	I0815 18:40:04.953101   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.953110   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:04.953116   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:04.953163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:04.988219   68713 cri.go:89] found id: ""
	I0815 18:40:04.988246   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.988255   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:04.988264   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:04.988275   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:05.060859   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:05.060896   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:05.060913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:05.146768   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:05.146817   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:05.187816   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:05.187845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:05.239027   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:05.239067   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:07.754503   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:07.769608   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:07.769695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:06.850409   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.155042   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.654547   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:09.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.658972   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:10.159255   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.804435   68713 cri.go:89] found id: ""
	I0815 18:40:07.804460   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.804468   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:07.804474   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:07.804551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:07.839760   68713 cri.go:89] found id: ""
	I0815 18:40:07.839787   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.839797   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:07.839804   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:07.839868   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:07.877984   68713 cri.go:89] found id: ""
	I0815 18:40:07.878009   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.878017   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:07.878022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:07.878070   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:07.914294   68713 cri.go:89] found id: ""
	I0815 18:40:07.914319   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.914328   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:07.914336   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:07.914395   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:07.948751   68713 cri.go:89] found id: ""
	I0815 18:40:07.948777   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.948787   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:07.948795   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:07.948861   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:07.982262   68713 cri.go:89] found id: ""
	I0815 18:40:07.982288   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.982296   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:07.982302   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:07.982358   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:08.015560   68713 cri.go:89] found id: ""
	I0815 18:40:08.015588   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.015596   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:08.015602   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:08.015662   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:08.049854   68713 cri.go:89] found id: ""
	I0815 18:40:08.049878   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.049885   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:08.049893   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:08.049905   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:08.102269   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:08.102303   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:08.117181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:08.117209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:08.188586   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:08.188609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:08.188623   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:08.272204   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:08.272239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:10.813223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:10.826181   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:10.826257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:10.863728   68713 cri.go:89] found id: ""
	I0815 18:40:10.863753   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.863761   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:10.863766   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:10.863813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:10.898074   68713 cri.go:89] found id: ""
	I0815 18:40:10.898102   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.898113   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:10.898121   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:10.898183   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:10.933948   68713 cri.go:89] found id: ""
	I0815 18:40:10.933980   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.933991   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:10.933998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:10.934059   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:10.972402   68713 cri.go:89] found id: ""
	I0815 18:40:10.972428   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.972436   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:10.972442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:10.972509   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:11.006814   68713 cri.go:89] found id: ""
	I0815 18:40:11.006843   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.006851   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:11.006857   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:11.006909   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:11.042739   68713 cri.go:89] found id: ""
	I0815 18:40:11.042763   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.042771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:11.042777   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:11.042835   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:11.079132   68713 cri.go:89] found id: ""
	I0815 18:40:11.079164   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.079173   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:11.079179   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:11.079228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:11.113271   68713 cri.go:89] found id: ""
	I0815 18:40:11.113298   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.113309   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:11.113317   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:11.113328   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:11.166669   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:11.166698   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:11.180789   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:11.180815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:11.247954   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:11.247985   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:11.247999   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:11.331952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:11.331995   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:09.349194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.349627   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.850439   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.655088   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.656674   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:12.658287   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:15.158361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.874466   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:13.888346   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:13.888416   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:13.922542   68713 cri.go:89] found id: ""
	I0815 18:40:13.922569   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.922579   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:13.922586   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:13.922654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:13.958039   68713 cri.go:89] found id: ""
	I0815 18:40:13.958066   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.958076   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:13.958082   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:13.958131   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:13.994095   68713 cri.go:89] found id: ""
	I0815 18:40:13.994125   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.994136   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:13.994144   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:13.994195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:14.027918   68713 cri.go:89] found id: ""
	I0815 18:40:14.027949   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.027960   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:14.027969   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:14.028027   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:14.063849   68713 cri.go:89] found id: ""
	I0815 18:40:14.063879   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.063889   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:14.063897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:14.063957   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:14.098444   68713 cri.go:89] found id: ""
	I0815 18:40:14.098473   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.098483   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:14.098490   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:14.098553   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:14.136834   68713 cri.go:89] found id: ""
	I0815 18:40:14.136861   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.136874   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:14.136880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:14.136925   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:14.172377   68713 cri.go:89] found id: ""
	I0815 18:40:14.172400   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.172408   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:14.172415   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:14.172430   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:14.212212   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:14.212242   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:14.268412   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:14.268450   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:14.282978   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:14.283006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:14.352777   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:14.352796   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:14.352822   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:16.939906   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:16.953118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:16.953178   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:16.991697   68713 cri.go:89] found id: ""
	I0815 18:40:16.991723   68713 logs.go:276] 0 containers: []
	W0815 18:40:16.991731   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:16.991736   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:16.991801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:17.027572   68713 cri.go:89] found id: ""
	I0815 18:40:17.027602   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.027613   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:17.027623   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:17.027682   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:17.060718   68713 cri.go:89] found id: ""
	I0815 18:40:17.060750   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.060763   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:17.060771   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:17.060829   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:17.096746   68713 cri.go:89] found id: ""
	I0815 18:40:17.096771   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.096780   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:17.096786   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:17.096846   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:17.130755   68713 cri.go:89] found id: ""
	I0815 18:40:17.130791   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.130802   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:17.130810   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:17.130872   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:17.167991   68713 cri.go:89] found id: ""
	I0815 18:40:17.168016   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.168026   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:17.168034   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:17.168093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:17.200695   68713 cri.go:89] found id: ""
	I0815 18:40:17.200722   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.200733   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:17.200741   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:17.200799   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:17.237788   68713 cri.go:89] found id: ""
	I0815 18:40:17.237816   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.237824   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:17.237833   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:17.237848   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:17.288888   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:17.288921   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:17.302862   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:17.302903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:17.370062   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:17.370085   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:17.370100   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:17.444742   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:17.444781   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:16.349749   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.849197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:16.155555   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.654875   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:17.160009   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.657774   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.984813   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:19.998010   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:19.998077   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:20.032880   68713 cri.go:89] found id: ""
	I0815 18:40:20.032903   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.032912   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:20.032918   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:20.032973   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:20.069191   68713 cri.go:89] found id: ""
	I0815 18:40:20.069224   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.069236   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:20.069243   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:20.069301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:20.101930   68713 cri.go:89] found id: ""
	I0815 18:40:20.101954   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.101962   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:20.101968   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:20.102016   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:20.136981   68713 cri.go:89] found id: ""
	I0815 18:40:20.137006   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.137014   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:20.137020   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:20.137066   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:20.174517   68713 cri.go:89] found id: ""
	I0815 18:40:20.174543   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.174550   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:20.174556   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:20.174611   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:20.208525   68713 cri.go:89] found id: ""
	I0815 18:40:20.208549   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.208559   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:20.208567   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:20.208626   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:20.240824   68713 cri.go:89] found id: ""
	I0815 18:40:20.240855   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.240867   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:20.240874   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:20.240946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:20.277683   68713 cri.go:89] found id: ""
	I0815 18:40:20.277710   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.277720   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:20.277728   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:20.277739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:20.324271   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:20.324304   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:20.376250   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:20.376285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:20.392777   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:20.392813   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:20.464122   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:20.464156   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:20.464180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:20.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:22.849591   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:20.654982   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.154537   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:21.658354   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.658505   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.041684   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:23.055779   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:23.055858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:23.095391   68713 cri.go:89] found id: ""
	I0815 18:40:23.095414   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.095426   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:23.095432   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:23.095483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:23.134907   68713 cri.go:89] found id: ""
	I0815 18:40:23.134936   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.134943   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:23.134949   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:23.134994   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:23.171806   68713 cri.go:89] found id: ""
	I0815 18:40:23.171845   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.171854   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:23.171861   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:23.171924   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:23.205378   68713 cri.go:89] found id: ""
	I0815 18:40:23.205404   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.205412   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:23.205417   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:23.205467   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:23.239503   68713 cri.go:89] found id: ""
	I0815 18:40:23.239531   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.239540   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:23.239547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:23.239614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:23.275802   68713 cri.go:89] found id: ""
	I0815 18:40:23.275828   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.275842   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:23.275849   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:23.275894   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:23.310127   68713 cri.go:89] found id: ""
	I0815 18:40:23.310154   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.310167   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:23.310173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:23.310219   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:23.344646   68713 cri.go:89] found id: ""
	I0815 18:40:23.344674   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.344685   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:23.344696   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:23.344711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:23.397260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:23.397310   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:23.425518   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:23.425553   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:23.495528   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:23.495547   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:23.495562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:23.574489   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:23.574524   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.119044   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:26.133806   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:26.133880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:26.175683   68713 cri.go:89] found id: ""
	I0815 18:40:26.175711   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.175722   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:26.175730   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:26.175789   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:26.210634   68713 cri.go:89] found id: ""
	I0815 18:40:26.210658   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.210665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:26.210671   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:26.210724   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:26.244146   68713 cri.go:89] found id: ""
	I0815 18:40:26.244176   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.244187   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:26.244195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:26.244274   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:26.277312   68713 cri.go:89] found id: ""
	I0815 18:40:26.277335   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.277343   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:26.277349   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:26.277410   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:26.311538   68713 cri.go:89] found id: ""
	I0815 18:40:26.311562   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.311570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:26.311576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:26.311623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:26.347816   68713 cri.go:89] found id: ""
	I0815 18:40:26.347840   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.347847   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:26.347853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:26.347906   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:26.381211   68713 cri.go:89] found id: ""
	I0815 18:40:26.381234   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.381242   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:26.381248   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:26.381303   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:26.413982   68713 cri.go:89] found id: ""
	I0815 18:40:26.414010   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.414018   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:26.414027   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:26.414038   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:26.500686   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:26.500721   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.537615   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:26.537642   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:26.590119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:26.590150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:26.603713   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:26.603739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:26.675455   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:25.349400   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.853388   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:25.155463   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.155580   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.156973   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:26.158898   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:28.658576   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.176084   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:29.189743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:29.189813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:29.225500   68713 cri.go:89] found id: ""
	I0815 18:40:29.225536   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.225548   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:29.225557   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:29.225614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:29.261839   68713 cri.go:89] found id: ""
	I0815 18:40:29.261866   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.261877   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:29.261884   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:29.261946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:29.296685   68713 cri.go:89] found id: ""
	I0815 18:40:29.296708   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.296716   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:29.296728   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:29.296787   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:29.332524   68713 cri.go:89] found id: ""
	I0815 18:40:29.332550   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.332558   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:29.332564   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:29.332615   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:29.368918   68713 cri.go:89] found id: ""
	I0815 18:40:29.368943   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.368953   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:29.368961   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:29.369020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:29.403175   68713 cri.go:89] found id: ""
	I0815 18:40:29.403200   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.403211   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:29.403218   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:29.403279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:29.438957   68713 cri.go:89] found id: ""
	I0815 18:40:29.438981   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.438989   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:29.438994   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:29.439051   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:29.472153   68713 cri.go:89] found id: ""
	I0815 18:40:29.472184   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.472195   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:29.472206   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:29.472221   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:29.560484   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:29.560547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:29.600366   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:29.600402   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:29.656536   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:29.656569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:29.669899   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:29.669925   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:29.738515   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:32.239207   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:32.253976   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:32.254048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:32.290918   68713 cri.go:89] found id: ""
	I0815 18:40:32.290942   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.290951   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:32.290957   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:32.291009   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:32.325567   68713 cri.go:89] found id: ""
	I0815 18:40:32.325596   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.325606   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:32.325613   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:32.325674   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:32.360959   68713 cri.go:89] found id: ""
	I0815 18:40:32.360994   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.361005   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:32.361015   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:32.361090   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:32.398583   68713 cri.go:89] found id: ""
	I0815 18:40:32.398614   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.398625   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:32.398633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:32.398696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:32.432980   68713 cri.go:89] found id: ""
	I0815 18:40:32.433007   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.433017   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:32.433024   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:32.433088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:32.467645   68713 cri.go:89] found id: ""
	I0815 18:40:32.467678   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.467688   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:32.467697   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:32.467757   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:32.504233   68713 cri.go:89] found id: ""
	I0815 18:40:32.504265   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.504275   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:32.504282   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:32.504347   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:32.539127   68713 cri.go:89] found id: ""
	I0815 18:40:32.539160   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.539175   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:32.539186   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:32.539200   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:32.620782   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:32.620818   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:32.660920   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:32.660950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:32.714392   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:32.714425   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:32.727629   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:32.727655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:40:30.349267   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:32.349896   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:34.154871   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.157219   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:33.158733   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:35.158871   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:40:32.801258   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.301393   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:35.315460   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:35.315515   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:35.352266   68713 cri.go:89] found id: ""
	I0815 18:40:35.352287   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.352295   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:35.352301   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:35.352345   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:35.387274   68713 cri.go:89] found id: ""
	I0815 18:40:35.387305   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.387316   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:35.387324   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:35.387386   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:35.422376   68713 cri.go:89] found id: ""
	I0815 18:40:35.422403   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.422413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:35.422419   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:35.422464   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:35.456423   68713 cri.go:89] found id: ""
	I0815 18:40:35.456452   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.456459   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:35.456465   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:35.456544   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:35.494878   68713 cri.go:89] found id: ""
	I0815 18:40:35.494903   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.494912   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:35.494919   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:35.494980   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:35.528027   68713 cri.go:89] found id: ""
	I0815 18:40:35.528051   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.528062   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:35.528069   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:35.528128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:35.568543   68713 cri.go:89] found id: ""
	I0815 18:40:35.568570   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.568580   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:35.568587   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:35.568654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:35.627717   68713 cri.go:89] found id: ""
	I0815 18:40:35.627747   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.627766   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:35.627777   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:35.627792   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:35.691497   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:35.691530   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:35.705062   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:35.705092   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:35.783785   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.783806   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:35.783819   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:35.867282   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:35.867317   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:34.848226   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.849242   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.850686   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.154981   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.155165   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:37.659017   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.158408   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.407940   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:38.421571   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:38.421648   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:38.456551   68713 cri.go:89] found id: ""
	I0815 18:40:38.456586   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.456597   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:38.456604   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:38.456665   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:38.494133   68713 cri.go:89] found id: ""
	I0815 18:40:38.494167   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.494179   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:38.494186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:38.494253   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:38.531566   68713 cri.go:89] found id: ""
	I0815 18:40:38.531599   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.531610   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:38.531617   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:38.531678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:38.567613   68713 cri.go:89] found id: ""
	I0815 18:40:38.567640   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.567652   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:38.567659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:38.567717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:38.603172   68713 cri.go:89] found id: ""
	I0815 18:40:38.603201   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.603212   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:38.603225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:38.603284   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:38.639600   68713 cri.go:89] found id: ""
	I0815 18:40:38.639629   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.639640   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:38.639648   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:38.639710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:38.675780   68713 cri.go:89] found id: ""
	I0815 18:40:38.675811   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.675821   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:38.675828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:38.675885   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:38.708745   68713 cri.go:89] found id: ""
	I0815 18:40:38.708775   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.708786   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:38.708796   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:38.708815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:38.722485   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:38.722514   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:38.793913   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:38.793936   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:38.793950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:38.880706   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:38.880744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:38.919505   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:38.919533   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.472452   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:41.486204   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:41.486264   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:41.520251   68713 cri.go:89] found id: ""
	I0815 18:40:41.520282   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.520294   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:41.520302   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:41.520362   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:41.561294   68713 cri.go:89] found id: ""
	I0815 18:40:41.561325   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.561336   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:41.561343   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:41.561403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:41.595290   68713 cri.go:89] found id: ""
	I0815 18:40:41.595318   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.595326   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:41.595331   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:41.595381   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:41.629706   68713 cri.go:89] found id: ""
	I0815 18:40:41.629736   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.629744   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:41.629750   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:41.629816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:41.671862   68713 cri.go:89] found id: ""
	I0815 18:40:41.671885   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.671893   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:41.671898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:41.671951   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:41.710298   68713 cri.go:89] found id: ""
	I0815 18:40:41.710349   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.710360   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:41.710368   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:41.710425   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:41.745434   68713 cri.go:89] found id: ""
	I0815 18:40:41.745472   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.745487   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:41.745492   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:41.745548   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:41.781038   68713 cri.go:89] found id: ""
	I0815 18:40:41.781073   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.781081   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:41.781088   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:41.781099   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:41.863977   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:41.864023   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:41.907477   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:41.907505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.962921   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:41.962956   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:41.976458   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:41.976505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:42.044372   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:41.349260   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.349615   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.656633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.154626   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:42.658519   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.659640   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.544803   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:44.559538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:44.559595   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:44.595471   68713 cri.go:89] found id: ""
	I0815 18:40:44.595501   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.595511   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:44.595518   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:44.595581   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:44.630148   68713 cri.go:89] found id: ""
	I0815 18:40:44.630173   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.630181   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:44.630189   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:44.630245   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:44.666084   68713 cri.go:89] found id: ""
	I0815 18:40:44.666110   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.666119   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:44.666126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:44.666180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:44.700286   68713 cri.go:89] found id: ""
	I0815 18:40:44.700320   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.700331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:44.700339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:44.700394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:44.734115   68713 cri.go:89] found id: ""
	I0815 18:40:44.734143   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.734151   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:44.734157   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:44.734216   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:44.770306   68713 cri.go:89] found id: ""
	I0815 18:40:44.770363   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.770376   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:44.770383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:44.770453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:44.806766   68713 cri.go:89] found id: ""
	I0815 18:40:44.806790   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.806798   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:44.806803   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:44.806865   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:44.843574   68713 cri.go:89] found id: ""
	I0815 18:40:44.843603   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.843613   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:44.843623   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:44.843638   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:44.896119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:44.896148   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:44.909537   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:44.909562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:44.980268   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:44.980290   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:44.980307   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:45.066589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:45.066626   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:47.605934   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:47.620644   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:47.620709   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:47.660939   68713 cri.go:89] found id: ""
	I0815 18:40:47.660960   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.660967   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:47.660973   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:47.661021   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:47.701018   68713 cri.go:89] found id: ""
	I0815 18:40:47.701047   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.701059   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:47.701107   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:47.701177   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:47.739487   68713 cri.go:89] found id: ""
	I0815 18:40:47.739514   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.739523   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:47.739528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:47.739584   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:47.781483   68713 cri.go:89] found id: ""
	I0815 18:40:47.781508   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.781515   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:47.781520   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:47.781571   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:45.850565   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.851368   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:45.156177   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.654437   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.157895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:49.658101   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.816781   68713 cri.go:89] found id: ""
	I0815 18:40:47.816806   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.816813   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:47.816819   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:47.816875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:47.853951   68713 cri.go:89] found id: ""
	I0815 18:40:47.853976   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.853984   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:47.853990   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:47.854062   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:47.892208   68713 cri.go:89] found id: ""
	I0815 18:40:47.892237   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.892246   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:47.892252   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:47.892311   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:47.926916   68713 cri.go:89] found id: ""
	I0815 18:40:47.926944   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.926965   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:47.926976   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:47.926990   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:48.002907   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:48.002927   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:48.002942   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:48.085727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:48.085762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:48.127192   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:48.127224   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:48.180172   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:48.180208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:50.694573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:50.709411   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:50.709472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:50.750956   68713 cri.go:89] found id: ""
	I0815 18:40:50.750985   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.750994   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:50.751000   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:50.751048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:50.791072   68713 cri.go:89] found id: ""
	I0815 18:40:50.791149   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.791174   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:50.791186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:50.791247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:50.827692   68713 cri.go:89] found id: ""
	I0815 18:40:50.827717   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.827728   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:50.827735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:50.827794   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:50.866587   68713 cri.go:89] found id: ""
	I0815 18:40:50.866616   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.866626   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:50.866633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:50.866692   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:50.907012   68713 cri.go:89] found id: ""
	I0815 18:40:50.907040   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.907047   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:50.907053   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:50.907101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:50.951212   68713 cri.go:89] found id: ""
	I0815 18:40:50.951243   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.951256   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:50.951263   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:50.951316   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:50.989771   68713 cri.go:89] found id: ""
	I0815 18:40:50.989802   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.989812   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:50.989818   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:50.989867   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:51.024423   68713 cri.go:89] found id: ""
	I0815 18:40:51.024454   68713 logs.go:276] 0 containers: []
	W0815 18:40:51.024465   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:51.024475   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:51.024500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:51.076973   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:51.077012   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:51.090963   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:51.090989   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:51.169981   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:51.170005   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:51.170029   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:51.248990   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:51.249040   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:50.349092   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.350278   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:50.154517   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.148131   68248 pod_ready.go:82] duration metric: took 4m0.000077937s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	E0815 18:40:52.148161   68248 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 18:40:52.148183   68248 pod_ready.go:39] duration metric: took 4m13.224994468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:40:52.148235   68248 kubeadm.go:597] duration metric: took 4m20.945128985s to restartPrimaryControlPlane
	W0815 18:40:52.148324   68248 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:40:52.148376   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:40:51.660289   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:54.157718   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:53.790172   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:53.803752   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:53.803816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:53.843203   68713 cri.go:89] found id: ""
	I0815 18:40:53.843231   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.843246   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:53.843254   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:53.843314   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:53.878975   68713 cri.go:89] found id: ""
	I0815 18:40:53.879000   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.879008   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:53.879013   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:53.879078   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:53.915640   68713 cri.go:89] found id: ""
	I0815 18:40:53.915668   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.915675   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:53.915683   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:53.915746   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:53.956312   68713 cri.go:89] found id: ""
	I0815 18:40:53.956340   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.956356   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:53.956365   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:53.956426   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:53.992276   68713 cri.go:89] found id: ""
	I0815 18:40:53.992304   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.992314   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:53.992322   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:53.992387   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:54.034653   68713 cri.go:89] found id: ""
	I0815 18:40:54.034682   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.034693   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:54.034701   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:54.034761   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:54.072993   68713 cri.go:89] found id: ""
	I0815 18:40:54.073018   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.073027   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:54.073038   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:54.073107   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:54.107414   68713 cri.go:89] found id: ""
	I0815 18:40:54.107446   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.107456   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:54.107466   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:54.107481   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:54.145900   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:54.145928   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:54.197609   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:54.197639   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:54.211384   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:54.211410   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:54.280991   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:54.281018   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:54.281031   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:56.868270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:56.881168   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:56.881248   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:56.915206   68713 cri.go:89] found id: ""
	I0815 18:40:56.915235   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.915243   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:56.915249   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:56.915308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:56.950838   68713 cri.go:89] found id: ""
	I0815 18:40:56.950864   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.950873   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:56.950879   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:56.950937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:56.993625   68713 cri.go:89] found id: ""
	I0815 18:40:56.993649   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.993656   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:56.993662   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:56.993718   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:57.029109   68713 cri.go:89] found id: ""
	I0815 18:40:57.029139   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.029150   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:57.029158   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:57.029213   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:57.063480   68713 cri.go:89] found id: ""
	I0815 18:40:57.063518   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.063530   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:57.063538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:57.063598   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:57.102830   68713 cri.go:89] found id: ""
	I0815 18:40:57.102859   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.102870   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:57.102877   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:57.102938   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:57.137116   68713 cri.go:89] found id: ""
	I0815 18:40:57.137146   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.137159   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:57.137173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:57.137235   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:57.174678   68713 cri.go:89] found id: ""
	I0815 18:40:57.174706   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.174717   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:57.174727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:57.174741   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:57.213270   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:57.213311   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:57.269463   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:57.269500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:57.283891   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:57.283915   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:57.355563   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:57.355589   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:57.355601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:54.849266   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:57.350343   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:56.657843   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:58.658098   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:59.943493   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:59.957225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:59.957285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:59.993113   68713 cri.go:89] found id: ""
	I0815 18:40:59.993142   68713 logs.go:276] 0 containers: []
	W0815 18:40:59.993153   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:59.993167   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:59.993228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:00.033485   68713 cri.go:89] found id: ""
	I0815 18:41:00.033515   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.033525   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:00.033533   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:00.033594   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:00.070808   68713 cri.go:89] found id: ""
	I0815 18:41:00.070830   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.070838   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:00.070844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:00.070893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:00.113043   68713 cri.go:89] found id: ""
	I0815 18:41:00.113067   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.113076   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:00.113082   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:00.113139   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:00.148089   68713 cri.go:89] found id: ""
	I0815 18:41:00.148118   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.148129   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:00.148136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:00.148206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:00.188343   68713 cri.go:89] found id: ""
	I0815 18:41:00.188375   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.188386   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:00.188394   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:00.188448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:00.224287   68713 cri.go:89] found id: ""
	I0815 18:41:00.224312   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.224323   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:00.224337   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:00.224398   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:00.263983   68713 cri.go:89] found id: ""
	I0815 18:41:00.264008   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.264016   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:00.264025   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:00.264037   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:00.278057   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:00.278083   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:00.355112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:00.355133   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:00.355146   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:00.436636   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:00.436672   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:00.474774   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:00.474801   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:59.849797   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:02.349363   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:01.158004   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.158380   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:05.658860   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.027434   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:03.041422   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:03.041496   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:03.074093   68713 cri.go:89] found id: ""
	I0815 18:41:03.074119   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.074130   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:03.074138   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:03.074198   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:03.111489   68713 cri.go:89] found id: ""
	I0815 18:41:03.111517   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.111529   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:03.111537   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:03.111599   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:03.147716   68713 cri.go:89] found id: ""
	I0815 18:41:03.147747   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.147756   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:03.147762   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:03.147825   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:03.184609   68713 cri.go:89] found id: ""
	I0815 18:41:03.184635   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.184644   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:03.184652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:03.184710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:03.221839   68713 cri.go:89] found id: ""
	I0815 18:41:03.221869   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.221878   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:03.221883   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:03.221935   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:03.262619   68713 cri.go:89] found id: ""
	I0815 18:41:03.262649   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.262661   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:03.262669   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:03.262733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:03.297826   68713 cri.go:89] found id: ""
	I0815 18:41:03.297849   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.297864   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:03.297875   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:03.297922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:03.345046   68713 cri.go:89] found id: ""
	I0815 18:41:03.345074   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.345083   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:03.345095   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:03.345133   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:03.416878   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:03.416905   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:03.416920   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:03.491548   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:03.491583   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:03.533821   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:03.533852   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:03.587749   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:03.587787   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.104002   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:06.118123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:06.118195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:06.156179   68713 cri.go:89] found id: ""
	I0815 18:41:06.156204   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.156213   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:06.156218   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:06.156275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:06.192834   68713 cri.go:89] found id: ""
	I0815 18:41:06.192858   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.192866   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:06.192871   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:06.192918   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:06.228355   68713 cri.go:89] found id: ""
	I0815 18:41:06.228379   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.228387   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:06.228393   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:06.228453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:06.262041   68713 cri.go:89] found id: ""
	I0815 18:41:06.262068   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.262079   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:06.262086   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:06.262152   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:06.303217   68713 cri.go:89] found id: ""
	I0815 18:41:06.303249   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.303261   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:06.303268   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:06.303335   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:06.337180   68713 cri.go:89] found id: ""
	I0815 18:41:06.337208   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.337215   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:06.337222   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:06.337270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:06.375054   68713 cri.go:89] found id: ""
	I0815 18:41:06.375081   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.375088   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:06.375095   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:06.375163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:06.412188   68713 cri.go:89] found id: ""
	I0815 18:41:06.412216   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.412227   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:06.412239   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:06.412255   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.425607   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:06.425633   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:06.500853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:06.500872   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:06.500883   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:06.577297   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:06.577333   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:06.620209   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:06.620239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:04.848677   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:06.849254   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.849300   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.157734   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:10.157969   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:09.171606   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:09.184197   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:09.184257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:09.217865   68713 cri.go:89] found id: ""
	I0815 18:41:09.217893   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.217904   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:09.217912   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:09.217967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:09.254032   68713 cri.go:89] found id: ""
	I0815 18:41:09.254055   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.254064   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:09.254073   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:09.254128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:09.291772   68713 cri.go:89] found id: ""
	I0815 18:41:09.291798   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.291808   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:09.291816   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:09.291880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:09.326695   68713 cri.go:89] found id: ""
	I0815 18:41:09.326717   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.326726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:09.326731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:09.326791   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:09.365779   68713 cri.go:89] found id: ""
	I0815 18:41:09.365807   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.365818   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:09.365825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:09.365880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:09.413475   68713 cri.go:89] found id: ""
	I0815 18:41:09.413500   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.413509   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:09.413514   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:09.413578   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:09.449483   68713 cri.go:89] found id: ""
	I0815 18:41:09.449511   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.449521   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:09.449528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:09.449623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:09.487484   68713 cri.go:89] found id: ""
	I0815 18:41:09.487513   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.487525   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:09.487535   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:09.487549   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:09.536746   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:09.536777   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:09.549912   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:09.549944   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:09.619192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:09.619227   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:09.619246   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:09.698370   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:09.698404   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:12.240745   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:12.254814   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:12.254875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:12.291346   68713 cri.go:89] found id: ""
	I0815 18:41:12.291376   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.291387   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:12.291395   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:12.291456   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:12.324832   68713 cri.go:89] found id: ""
	I0815 18:41:12.324867   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.324878   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:12.324886   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:12.324950   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:12.360172   68713 cri.go:89] found id: ""
	I0815 18:41:12.360193   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.360201   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:12.360206   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:12.360251   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:12.394671   68713 cri.go:89] found id: ""
	I0815 18:41:12.394700   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.394710   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:12.394731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:12.394800   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:12.428951   68713 cri.go:89] found id: ""
	I0815 18:41:12.428999   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.429007   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:12.429013   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:12.429057   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:12.466035   68713 cri.go:89] found id: ""
	I0815 18:41:12.466061   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.466069   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:12.466075   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:12.466125   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:12.500003   68713 cri.go:89] found id: ""
	I0815 18:41:12.500031   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.500042   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:12.500050   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:12.500105   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:12.537433   68713 cri.go:89] found id: ""
	I0815 18:41:12.537457   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.537464   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:12.537473   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:12.537484   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:12.586768   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:12.586809   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:12.600549   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:12.600578   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:12.673112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:12.673138   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:12.673154   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:12.754689   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:12.754726   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:11.348767   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.349973   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:12.158249   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.158354   68429 pod_ready.go:82] duration metric: took 4m0.006607137s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:13.158373   68429 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:13.158381   68429 pod_ready.go:39] duration metric: took 4m7.064501997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:13.158395   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:13.158423   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:13.158467   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:13.203746   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.203771   68429 cri.go:89] found id: ""
	I0815 18:41:13.203779   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:13.203840   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.208188   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:13.208248   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:13.245326   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.245351   68429 cri.go:89] found id: ""
	I0815 18:41:13.245359   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:13.245412   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.250212   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:13.250281   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:13.296537   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:13.296565   68429 cri.go:89] found id: ""
	I0815 18:41:13.296576   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:13.296634   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.300823   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:13.300881   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:13.337973   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.338018   68429 cri.go:89] found id: ""
	I0815 18:41:13.338031   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:13.338083   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.342251   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:13.342307   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:13.379921   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.379948   68429 cri.go:89] found id: ""
	I0815 18:41:13.379957   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:13.380005   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.384451   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:13.384539   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:13.421077   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:13.421113   68429 cri.go:89] found id: ""
	I0815 18:41:13.421122   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:13.421180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.425566   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:13.425640   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:13.468663   68429 cri.go:89] found id: ""
	I0815 18:41:13.468688   68429 logs.go:276] 0 containers: []
	W0815 18:41:13.468696   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:13.468701   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:13.468753   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:13.506689   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:13.506711   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:13.506715   68429 cri.go:89] found id: ""
	I0815 18:41:13.506723   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:13.506784   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.511177   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.515519   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:13.515543   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:13.583771   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:13.583806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:13.714906   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:13.714945   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.766512   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:13.766548   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.818416   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:13.818450   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.859035   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:13.859073   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.901515   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:13.901546   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:14.437262   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:14.437304   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:14.453511   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:14.453551   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:14.489238   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:14.489267   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:14.540141   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:14.540184   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:14.574758   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:14.574785   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:14.609370   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:14.609398   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:15.294667   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:15.307758   68713 kubeadm.go:597] duration metric: took 4m2.67500099s to restartPrimaryControlPlane
	W0815 18:41:15.307840   68713 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:41:15.307872   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:41:15.761255   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:15.776049   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:15.786643   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:15.796517   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:15.796537   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:15.796585   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:15.806118   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:15.806167   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:15.816363   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:15.826396   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:15.826449   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:15.836538   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.847035   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:15.847093   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.857475   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:15.867084   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:15.867144   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:15.879736   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:15.954497   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:41:15.954588   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:16.098128   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:16.098244   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:16.098345   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:41:16.288507   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:16.290439   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:16.290555   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:16.290656   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:16.290756   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:16.290831   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:16.290923   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:16.291003   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:16.291096   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:16.291182   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:16.291280   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:16.291396   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:16.291457   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:16.291509   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:16.363570   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:16.549782   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:16.789250   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:16.983388   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:17.004293   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:17.006438   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:17.006485   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:17.154583   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:17.156594   68713 out.go:235]   - Booting up control plane ...
	I0815 18:41:17.156717   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:17.177351   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:17.179286   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:17.180313   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:17.183829   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:41:15.850424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.348986   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.430273   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.281857018s)
	I0815 18:41:18.430359   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:18.445633   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:18.457459   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:18.469748   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:18.469769   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:18.469818   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:18.480099   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:18.480146   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:18.491871   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:18.501274   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:18.501339   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:18.510186   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.518803   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:18.518863   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.527843   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:18.536437   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:18.536514   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:18.545573   68248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:18.596478   68248 kubeadm.go:310] W0815 18:41:18.577134    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.597311   68248 kubeadm.go:310] W0815 18:41:18.577958    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.709937   68248 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:41:17.151343   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:17.173653   68429 api_server.go:72] duration metric: took 4m18.293407117s to wait for apiserver process to appear ...
	I0815 18:41:17.173677   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:17.173724   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:17.173784   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:17.211484   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.211509   68429 cri.go:89] found id: ""
	I0815 18:41:17.211518   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:17.211583   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.216011   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:17.216107   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:17.265454   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.265486   68429 cri.go:89] found id: ""
	I0815 18:41:17.265497   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:17.265554   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.269804   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:17.269868   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:17.310339   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.310363   68429 cri.go:89] found id: ""
	I0815 18:41:17.310371   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:17.310435   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.315639   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:17.315695   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:17.352364   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.352387   68429 cri.go:89] found id: ""
	I0815 18:41:17.352396   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:17.352452   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.356782   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:17.356848   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:17.396704   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.396733   68429 cri.go:89] found id: ""
	I0815 18:41:17.396744   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:17.396799   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.400920   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:17.400985   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:17.440361   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.440390   68429 cri.go:89] found id: ""
	I0815 18:41:17.440400   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:17.440464   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.445057   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:17.445127   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:17.487341   68429 cri.go:89] found id: ""
	I0815 18:41:17.487369   68429 logs.go:276] 0 containers: []
	W0815 18:41:17.487380   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:17.487388   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:17.487446   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:17.528197   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.528218   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.528223   68429 cri.go:89] found id: ""
	I0815 18:41:17.528229   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:17.528285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.532536   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.536745   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:17.536768   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.574236   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:17.574268   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:17.617822   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:17.617853   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.673009   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:17.673037   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.717620   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:17.717647   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.764641   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:17.764671   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.815586   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:17.815618   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.855287   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:17.855310   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.906486   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:17.906514   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.941540   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:17.941562   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:18.373461   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:18.373497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:18.454203   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:18.454244   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:18.470284   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:18.470315   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:20.349635   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:22.350034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:21.080947   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:41:21.085334   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:41:21.086420   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:21.086442   68429 api_server.go:131] duration metric: took 3.912756949s to wait for apiserver health ...
	I0815 18:41:21.086452   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:21.086481   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:21.086537   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:21.124183   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.124210   68429 cri.go:89] found id: ""
	I0815 18:41:21.124218   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:21.124285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.128402   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:21.128472   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:21.164737   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.164768   68429 cri.go:89] found id: ""
	I0815 18:41:21.164779   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:21.164835   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.170622   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:21.170699   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:21.206823   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.206847   68429 cri.go:89] found id: ""
	I0815 18:41:21.206855   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:21.206910   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.211055   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:21.211128   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:21.255529   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.255555   68429 cri.go:89] found id: ""
	I0815 18:41:21.255565   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:21.255629   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.260062   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:21.260139   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:21.298058   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.298116   68429 cri.go:89] found id: ""
	I0815 18:41:21.298124   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:21.298180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.302821   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:21.302892   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:21.340895   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.340925   68429 cri.go:89] found id: ""
	I0815 18:41:21.340936   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:21.341003   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.345545   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:21.345638   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:21.383180   68429 cri.go:89] found id: ""
	I0815 18:41:21.383212   68429 logs.go:276] 0 containers: []
	W0815 18:41:21.383223   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:21.383232   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:21.383301   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:21.421152   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:21.421178   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.421185   68429 cri.go:89] found id: ""
	I0815 18:41:21.421198   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:21.421257   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.426326   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.430307   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:21.430351   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:21.562655   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:21.562697   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.613436   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:21.613470   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.674678   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:21.674721   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.717283   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:21.717316   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.760218   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:21.760249   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.802313   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:21.802352   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.874565   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:21.874608   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:21.891629   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:21.891666   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:21.934128   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:21.934170   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.985467   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:21.985497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:22.023731   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:22.023770   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:22.403584   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:22.403626   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:25.005734   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:41:25.005760   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.005766   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.005770   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.005775   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.005778   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.005781   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.005788   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.005793   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.005799   68429 system_pods.go:74] duration metric: took 3.919341536s to wait for pod list to return data ...
	I0815 18:41:25.005806   68429 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:25.008398   68429 default_sa.go:45] found service account: "default"
	I0815 18:41:25.008419   68429 default_sa.go:55] duration metric: took 2.608281ms for default service account to be created ...
	I0815 18:41:25.008427   68429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:25.012784   68429 system_pods.go:86] 8 kube-system pods found
	I0815 18:41:25.012804   68429 system_pods.go:89] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.012810   68429 system_pods.go:89] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.012817   68429 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.012821   68429 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.012825   68429 system_pods.go:89] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.012828   68429 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.012834   68429 system_pods.go:89] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.012838   68429 system_pods.go:89] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.012850   68429 system_pods.go:126] duration metric: took 4.415694ms to wait for k8s-apps to be running ...
	I0815 18:41:25.012858   68429 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:25.012905   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:25.028245   68429 system_svc.go:56] duration metric: took 15.378403ms WaitForService to wait for kubelet
	I0815 18:41:25.028272   68429 kubeadm.go:582] duration metric: took 4m26.148030358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:25.028290   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:25.030696   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:25.030717   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:25.030728   68429 node_conditions.go:105] duration metric: took 2.43352ms to run NodePressure ...
	I0815 18:41:25.030742   68429 start.go:241] waiting for startup goroutines ...
	I0815 18:41:25.030751   68429 start.go:246] waiting for cluster config update ...
	I0815 18:41:25.030768   68429 start.go:255] writing updated cluster config ...
	I0815 18:41:25.031028   68429 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:25.077910   68429 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:25.079973   68429 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-423062" cluster and "default" namespace by default
	I0815 18:41:27.911884   68248 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 18:41:27.911943   68248 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:27.912011   68248 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:27.912130   68248 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:27.912272   68248 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 18:41:27.912359   68248 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:27.913884   68248 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:27.913990   68248 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:27.914092   68248 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:27.914197   68248 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:27.914289   68248 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:27.914362   68248 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:27.914433   68248 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:27.914521   68248 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:27.914606   68248 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:27.914859   68248 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:27.914984   68248 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:27.915040   68248 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:27.915119   68248 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:27.915190   68248 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:27.915268   68248 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 18:41:27.915336   68248 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:27.915419   68248 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:27.915500   68248 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:27.915606   68248 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:27.915691   68248 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:27.917229   68248 out.go:235]   - Booting up control plane ...
	I0815 18:41:27.917311   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:27.917377   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:27.917433   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:27.917521   68248 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:27.917590   68248 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:27.917623   68248 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:27.917740   68248 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 18:41:27.917829   68248 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 18:41:27.917880   68248 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00200618s
	I0815 18:41:27.917954   68248 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 18:41:27.918011   68248 kubeadm.go:310] [api-check] The API server is healthy after 5.501605719s
	I0815 18:41:27.918122   68248 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 18:41:27.918268   68248 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 18:41:27.918361   68248 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 18:41:27.918626   68248 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-555028 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 18:41:27.918723   68248 kubeadm.go:310] [bootstrap-token] Using token: 99xu37.bm6hiisu91f6rbvd
	I0815 18:41:27.920248   68248 out.go:235]   - Configuring RBAC rules ...
	I0815 18:41:27.920360   68248 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 18:41:27.920467   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 18:41:27.920651   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 18:41:27.920785   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 18:41:27.920938   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 18:41:27.921052   68248 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 18:41:27.921225   68248 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 18:41:27.921286   68248 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 18:41:27.921356   68248 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 18:41:27.921369   68248 kubeadm.go:310] 
	I0815 18:41:27.921422   68248 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 18:41:27.921428   68248 kubeadm.go:310] 
	I0815 18:41:27.921488   68248 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 18:41:27.921497   68248 kubeadm.go:310] 
	I0815 18:41:27.921521   68248 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 18:41:27.921570   68248 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 18:41:27.921619   68248 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 18:41:27.921625   68248 kubeadm.go:310] 
	I0815 18:41:27.921698   68248 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 18:41:27.921711   68248 kubeadm.go:310] 
	I0815 18:41:27.921776   68248 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 18:41:27.921787   68248 kubeadm.go:310] 
	I0815 18:41:27.921858   68248 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 18:41:27.921963   68248 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 18:41:27.922055   68248 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 18:41:27.922064   68248 kubeadm.go:310] 
	I0815 18:41:27.922166   68248 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 18:41:27.922281   68248 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 18:41:27.922306   68248 kubeadm.go:310] 
	I0815 18:41:27.922413   68248 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922550   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 18:41:27.922593   68248 kubeadm.go:310] 	--control-plane 
	I0815 18:41:27.922603   68248 kubeadm.go:310] 
	I0815 18:41:27.922703   68248 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 18:41:27.922712   68248 kubeadm.go:310] 
	I0815 18:41:27.922800   68248 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922901   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 18:41:27.922909   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:41:27.922916   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:41:27.924596   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:41:24.849483   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.350715   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.926142   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:41:27.938307   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:41:27.958862   68248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:41:27.958974   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:27.959032   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-555028 minikube.k8s.io/updated_at=2024_08_15T18_41_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=embed-certs-555028 minikube.k8s.io/primary=true
	I0815 18:41:28.156844   68248 ops.go:34] apiserver oom_adj: -16
	I0815 18:41:28.157122   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:28.657735   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.157713   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.658109   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.157486   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.657573   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.157463   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.658073   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.757929   68248 kubeadm.go:1113] duration metric: took 3.799012728s to wait for elevateKubeSystemPrivileges
	I0815 18:41:31.757969   68248 kubeadm.go:394] duration metric: took 5m0.607372858s to StartCluster
	I0815 18:41:31.757992   68248 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.758070   68248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:41:31.759686   68248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.759915   68248 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:41:31.759982   68248 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:41:31.760072   68248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-555028"
	I0815 18:41:31.760090   68248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-555028"
	I0815 18:41:31.760115   68248 addons.go:69] Setting metrics-server=true in profile "embed-certs-555028"
	I0815 18:41:31.760133   68248 addons.go:234] Setting addon metrics-server=true in "embed-certs-555028"
	W0815 18:41:31.760141   68248 addons.go:243] addon metrics-server should already be in state true
	I0815 18:41:31.760148   68248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-555028"
	I0815 18:41:31.760174   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760110   68248 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-555028"
	W0815 18:41:31.760230   68248 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:41:31.760270   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760108   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:41:31.760603   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760619   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760637   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760642   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760658   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760708   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.761566   68248 out.go:177] * Verifying Kubernetes components...
	I0815 18:41:31.762780   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:41:31.777893   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0815 18:41:31.778444   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.779021   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.779049   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.779496   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.780129   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.780182   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.780954   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0815 18:41:31.781146   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0815 18:41:31.781506   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.781586   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.782056   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782061   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782078   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782079   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782437   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782494   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782685   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.783004   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.783034   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.786246   68248 addons.go:234] Setting addon default-storageclass=true in "embed-certs-555028"
	W0815 18:41:31.786270   68248 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:41:31.786300   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.786682   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.786714   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.800152   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	I0815 18:41:31.800729   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.801272   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.801295   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.801656   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.801835   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.803539   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0815 18:41:31.803751   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.804058   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.804640   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.804660   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.805007   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.805157   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.806098   68248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:41:31.806397   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0815 18:41:31.806814   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.807269   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.807450   68248 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:31.807466   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:41:31.807484   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.807744   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.807757   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.808066   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.808889   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.808923   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.809143   68248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:41:31.810575   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:41:31.810593   68248 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:41:31.810619   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.810648   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811760   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.811761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.811802   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811948   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.812101   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.812243   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.814211   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.814675   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814953   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.815117   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.815271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.815391   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.829657   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0815 18:41:31.830122   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.830710   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.830734   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.831077   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.831291   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.833016   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.833271   68248 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:31.833285   68248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:41:31.833302   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.836248   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836655   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.836682   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836908   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.837097   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.837233   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.837410   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.988466   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:41:32.010147   68248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019505   68248 node_ready.go:49] node "embed-certs-555028" has status "Ready":"True"
	I0815 18:41:32.019529   68248 node_ready.go:38] duration metric: took 9.346825ms for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019541   68248 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:32.032036   68248 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:32.125991   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:32.138532   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:41:32.138554   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:41:32.155222   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:32.196478   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:41:32.196517   68248 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:41:32.270461   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:32.270495   68248 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:41:32.405567   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:33.205712   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.050454437s)
	I0815 18:41:33.205772   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205785   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.205793   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.079759984s)
	I0815 18:41:33.205826   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205838   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206153   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206169   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206184   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206194   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206200   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206205   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206210   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206218   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206202   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206226   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206415   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206421   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206430   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206432   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.245033   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.245061   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.245328   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.245343   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.651886   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246273862s)
	I0815 18:41:33.651945   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.651960   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652264   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652307   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.652326   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.652335   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652618   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652640   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652650   68248 addons.go:475] Verifying addon metrics-server=true in "embed-certs-555028"
	I0815 18:41:33.652697   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.654487   68248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:41:29.848462   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:31.850877   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:33.655868   68248 addons.go:510] duration metric: took 1.89588756s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 18:41:34.044605   68248 pod_ready.go:103] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:34.538170   68248 pod_ready.go:93] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.538199   68248 pod_ready.go:82] duration metric: took 2.506135047s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.538212   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543160   68248 pod_ready.go:93] pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.543182   68248 pod_ready.go:82] duration metric: took 4.962289ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543195   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547126   68248 pod_ready.go:93] pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.547144   68248 pod_ready.go:82] duration metric: took 3.94279ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547152   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:36.553459   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:37.555276   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:37.555299   68248 pod_ready.go:82] duration metric: took 3.008140869s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:37.555307   68248 pod_ready.go:39] duration metric: took 5.535754922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:37.555330   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:37.555378   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:37.575318   68248 api_server.go:72] duration metric: took 5.815371975s to wait for apiserver process to appear ...
	I0815 18:41:37.575344   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:37.575361   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:41:37.580989   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:41:37.582142   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:37.582164   68248 api_server.go:131] duration metric: took 6.812732ms to wait for apiserver health ...
	I0815 18:41:37.582174   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:37.589334   68248 system_pods.go:59] 9 kube-system pods found
	I0815 18:41:37.589366   68248 system_pods.go:61] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.589377   68248 system_pods.go:61] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.589385   68248 system_pods.go:61] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.589390   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.589397   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.589403   68248 system_pods.go:61] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.589410   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.589422   68248 system_pods.go:61] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.589431   68248 system_pods.go:61] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.589439   68248 system_pods.go:74] duration metric: took 7.257758ms to wait for pod list to return data ...
	I0815 18:41:37.589450   68248 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:37.592468   68248 default_sa.go:45] found service account: "default"
	I0815 18:41:37.592500   68248 default_sa.go:55] duration metric: took 3.029278ms for default service account to be created ...
	I0815 18:41:37.592511   68248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:37.597697   68248 system_pods.go:86] 9 kube-system pods found
	I0815 18:41:37.597725   68248 system_pods.go:89] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.597730   68248 system_pods.go:89] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.597736   68248 system_pods.go:89] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.597740   68248 system_pods.go:89] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.597744   68248 system_pods.go:89] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.597747   68248 system_pods.go:89] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.597751   68248 system_pods.go:89] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.597756   68248 system_pods.go:89] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.597763   68248 system_pods.go:89] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.597769   68248 system_pods.go:126] duration metric: took 5.252997ms to wait for k8s-apps to be running ...
	I0815 18:41:37.597779   68248 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:37.597819   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:37.616004   68248 system_svc.go:56] duration metric: took 18.217091ms WaitForService to wait for kubelet
	I0815 18:41:37.616032   68248 kubeadm.go:582] duration metric: took 5.856091444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:37.616049   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:37.619195   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:37.619215   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:37.619223   68248 node_conditions.go:105] duration metric: took 3.169759ms to run NodePressure ...
	I0815 18:41:37.619234   68248 start.go:241] waiting for startup goroutines ...
	I0815 18:41:37.619242   68248 start.go:246] waiting for cluster config update ...
	I0815 18:41:37.619253   68248 start.go:255] writing updated cluster config ...
	I0815 18:41:37.619520   68248 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:37.669469   68248 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:37.671485   68248 out.go:177] * Done! kubectl is now configured to use "embed-certs-555028" cluster and "default" namespace by default
	I0815 18:41:34.350702   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:36.849248   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:39.348684   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:41.349379   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:43.848932   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:46.348801   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:48.349736   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:50.848728   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:52.850583   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.184855   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:41:57.185437   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:41:57.185667   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:54.851200   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.349542   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:42:02.186077   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:02.186272   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:59.349724   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:59.349748   67936 pod_ready.go:82] duration metric: took 4m0.007281981s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:59.349757   67936 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:59.349763   67936 pod_ready.go:39] duration metric: took 4m1.606987494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:59.349779   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:59.349802   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:59.349844   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:59.395509   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:41:59.395541   67936 cri.go:89] found id: ""
	I0815 18:41:59.395552   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:41:59.395608   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.400063   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:59.400140   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:59.435356   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:41:59.435379   67936 cri.go:89] found id: ""
	I0815 18:41:59.435386   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:41:59.435431   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.440159   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:59.440213   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:59.479810   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.479841   67936 cri.go:89] found id: ""
	I0815 18:41:59.479851   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:41:59.479907   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.484341   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:59.484394   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:59.521077   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.521104   67936 cri.go:89] found id: ""
	I0815 18:41:59.521114   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:41:59.521168   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.525075   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:59.525131   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:59.564058   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:41:59.564084   67936 cri.go:89] found id: ""
	I0815 18:41:59.564093   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:41:59.564150   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.568668   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:59.568734   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:59.604385   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.604406   67936 cri.go:89] found id: ""
	I0815 18:41:59.604416   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:41:59.604473   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.609023   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:59.609095   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:59.646289   67936 cri.go:89] found id: ""
	I0815 18:41:59.646334   67936 logs.go:276] 0 containers: []
	W0815 18:41:59.646346   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:59.646355   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:59.646422   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:59.681861   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.681889   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:41:59.681895   67936 cri.go:89] found id: ""
	I0815 18:41:59.681903   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:41:59.681951   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.686379   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.690328   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:59.690353   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:59.759302   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:41:59.759338   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.798249   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:41:59.798276   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.834097   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:41:59.834129   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.885365   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:41:59.885398   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.923013   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:59.923038   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:59.938162   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:59.938192   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:00.077340   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:00.077377   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:00.122292   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:00.122323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:00.165209   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:00.165235   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:00.201278   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:00.201317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:00.238007   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:00.238042   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:00.709997   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:00.710043   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.252351   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:42:03.268074   67936 api_server.go:72] duration metric: took 4m12.770065297s to wait for apiserver process to appear ...
	I0815 18:42:03.268104   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:42:03.268159   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:03.268227   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:03.305890   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:03.305913   67936 cri.go:89] found id: ""
	I0815 18:42:03.305923   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:03.305981   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.309958   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:03.310019   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:03.344602   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:03.344630   67936 cri.go:89] found id: ""
	I0815 18:42:03.344639   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:03.344696   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.349261   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:03.349317   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:03.383892   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:03.383912   67936 cri.go:89] found id: ""
	I0815 18:42:03.383919   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:03.383968   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.388158   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:03.388219   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:03.423264   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.423293   67936 cri.go:89] found id: ""
	I0815 18:42:03.423303   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:03.423352   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.427436   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:03.427496   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:03.470792   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.470819   67936 cri.go:89] found id: ""
	I0815 18:42:03.470829   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:03.470890   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.475884   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:03.475956   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:03.513081   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.513103   67936 cri.go:89] found id: ""
	I0815 18:42:03.513110   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:03.513161   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.517913   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:03.517985   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:03.556149   67936 cri.go:89] found id: ""
	I0815 18:42:03.556180   67936 logs.go:276] 0 containers: []
	W0815 18:42:03.556191   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:03.556199   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:03.556257   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:03.595987   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:03.596015   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:03.596021   67936 cri.go:89] found id: ""
	I0815 18:42:03.596030   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:03.596112   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.600430   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.604422   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:03.604443   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:03.676629   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:03.676665   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.717487   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:03.717514   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.755606   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:03.755632   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.815152   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:03.815187   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.857853   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:03.857882   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:04.296939   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:04.296983   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:04.312346   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:04.312373   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:04.424132   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:04.424162   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:04.482298   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:04.482326   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:04.526805   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:04.526832   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:04.564842   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:04.564871   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:04.602297   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:04.602323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.137972   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:42:07.143165   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:42:07.144155   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:42:07.144174   67936 api_server.go:131] duration metric: took 3.876063215s to wait for apiserver health ...
	I0815 18:42:07.144182   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:42:07.144201   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:07.144243   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:07.185685   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:07.185709   67936 cri.go:89] found id: ""
	I0815 18:42:07.185717   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:07.185782   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.190086   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:07.190179   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:07.233020   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:07.233044   67936 cri.go:89] found id: ""
	I0815 18:42:07.233053   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:07.233114   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.237639   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:07.237698   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:07.277613   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:07.277642   67936 cri.go:89] found id: ""
	I0815 18:42:07.277652   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:07.277714   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.282273   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:07.282346   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:07.324972   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.325003   67936 cri.go:89] found id: ""
	I0815 18:42:07.325013   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:07.325071   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.329402   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:07.329470   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:07.369812   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.369840   67936 cri.go:89] found id: ""
	I0815 18:42:07.369849   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:07.369902   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.373993   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:07.374055   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:07.412036   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.412062   67936 cri.go:89] found id: ""
	I0815 18:42:07.412072   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:07.412145   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.416191   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:07.416263   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:07.457677   67936 cri.go:89] found id: ""
	I0815 18:42:07.457710   67936 logs.go:276] 0 containers: []
	W0815 18:42:07.457721   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:07.457728   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:07.457792   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:07.498173   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:07.498199   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.498204   67936 cri.go:89] found id: ""
	I0815 18:42:07.498210   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:07.498268   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.502704   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.506501   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:07.506520   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.542685   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:07.542713   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.584070   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:07.584097   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.634780   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:07.634812   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.669410   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:07.669436   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:08.062406   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:08.062454   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:08.077171   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:08.077209   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:08.186125   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:08.186158   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:08.229621   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:08.229655   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:08.266791   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:08.266818   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:08.314172   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:08.314197   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:08.388793   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:08.388837   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:08.438287   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:08.438317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:10.990845   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:42:10.990875   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.990879   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.990883   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.990887   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.990890   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.990894   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.990900   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.990905   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.990913   67936 system_pods.go:74] duration metric: took 3.846725869s to wait for pod list to return data ...
	I0815 18:42:10.990919   67936 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:42:10.993933   67936 default_sa.go:45] found service account: "default"
	I0815 18:42:10.993958   67936 default_sa.go:55] duration metric: took 3.032805ms for default service account to be created ...
	I0815 18:42:10.993968   67936 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:42:10.998531   67936 system_pods.go:86] 8 kube-system pods found
	I0815 18:42:10.998553   67936 system_pods.go:89] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.998558   67936 system_pods.go:89] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.998562   67936 system_pods.go:89] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.998567   67936 system_pods.go:89] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.998570   67936 system_pods.go:89] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.998575   67936 system_pods.go:89] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.998582   67936 system_pods.go:89] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.998586   67936 system_pods.go:89] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.998592   67936 system_pods.go:126] duration metric: took 4.619003ms to wait for k8s-apps to be running ...
	I0815 18:42:10.998598   67936 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:42:10.998638   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:42:11.015236   67936 system_svc.go:56] duration metric: took 16.627802ms WaitForService to wait for kubelet
	I0815 18:42:11.015260   67936 kubeadm.go:582] duration metric: took 4m20.517256799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:42:11.015280   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:42:11.018544   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:42:11.018570   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:42:11.018584   67936 node_conditions.go:105] duration metric: took 3.298753ms to run NodePressure ...
	I0815 18:42:11.018598   67936 start.go:241] waiting for startup goroutines ...
	I0815 18:42:11.018611   67936 start.go:246] waiting for cluster config update ...
	I0815 18:42:11.018626   67936 start.go:255] writing updated cluster config ...
	I0815 18:42:11.018907   67936 ssh_runner.go:195] Run: rm -f paused
	I0815 18:42:11.065371   67936 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:42:11.067513   67936 out.go:177] * Done! kubectl is now configured to use "no-preload-599042" cluster and "default" namespace by default
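	For reference, the pod and node state the wait loop above reports can be re-checked by hand once a profile comes up cleanly like "no-preload-599042" here. A minimal sketch, assuming the kubeconfig this start just wrote is the active one; the context name is taken from the log line above:
	    kubectl --context no-preload-599042 -n kube-system get pods
	    kubectl --context no-preload-599042 get nodes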
	I0815 18:42:12.186839   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:12.187041   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:32.187938   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:32.188123   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.189799   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:43:12.190012   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.190023   68713 kubeadm.go:310] 
	I0815 18:43:12.190075   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:43:12.190133   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:43:12.190148   68713 kubeadm.go:310] 
	I0815 18:43:12.190205   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:43:12.190265   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:43:12.190394   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:43:12.190403   68713 kubeadm.go:310] 
	I0815 18:43:12.190523   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:43:12.190571   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:43:12.190627   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:43:12.190636   68713 kubeadm.go:310] 
	I0815 18:43:12.190772   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:43:12.190928   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:43:12.190950   68713 kubeadm.go:310] 
	I0815 18:43:12.191108   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:43:12.191218   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:43:12.191344   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:43:12.191478   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:43:12.191504   68713 kubeadm.go:310] 
	I0815 18:43:12.192283   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:43:12.192421   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:43:12.192523   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0815 18:43:12.192655   68713 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 18:43:12.192699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:43:12.658571   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:43:12.675797   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:43:12.687340   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:43:12.687370   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:43:12.687422   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:43:12.698401   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:43:12.698464   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:43:12.709632   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:43:12.720330   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:43:12.720386   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:43:12.731593   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.742122   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:43:12.742185   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.753042   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:43:12.762799   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:43:12.762855   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:43:12.772788   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:43:12.987927   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:45:08.956975   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:45:08.957069   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:45:08.958834   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:45:08.958904   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:45:08.958993   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:45:08.959133   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:45:08.959280   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:45:08.959376   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:45:08.961205   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:45:08.961294   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:45:08.961352   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:45:08.961424   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:45:08.961475   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:45:08.961536   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:45:08.961581   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:45:08.961637   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:45:08.961689   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:45:08.961795   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:45:08.961910   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:45:08.961971   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:45:08.962030   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:45:08.962078   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:45:08.962127   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:45:08.962214   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:45:08.962316   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:45:08.962448   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:45:08.962565   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:45:08.962626   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:45:08.962724   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:45:08.964403   68713 out.go:235]   - Booting up control plane ...
	I0815 18:45:08.964526   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:45:08.964631   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:45:08.964736   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:45:08.964866   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:45:08.965043   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:45:08.965121   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:45:08.965225   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965418   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965508   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965703   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965766   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965919   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965981   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966140   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966200   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966381   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966389   68713 kubeadm.go:310] 
	I0815 18:45:08.966438   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:45:08.966473   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:45:08.966481   68713 kubeadm.go:310] 
	I0815 18:45:08.966533   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:45:08.966580   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:45:08.966711   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:45:08.966718   68713 kubeadm.go:310] 
	I0815 18:45:08.966844   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:45:08.966900   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:45:08.966948   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:45:08.966958   68713 kubeadm.go:310] 
	I0815 18:45:08.967082   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:45:08.967201   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:45:08.967214   68713 kubeadm.go:310] 
	I0815 18:45:08.967341   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:45:08.967450   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:45:08.967548   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:45:08.967646   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:45:08.967678   68713 kubeadm.go:310] 
	I0815 18:45:08.967716   68713 kubeadm.go:394] duration metric: took 7m56.388213745s to StartCluster
	I0815 18:45:08.967768   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:45:08.967834   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:45:09.013913   68713 cri.go:89] found id: ""
	I0815 18:45:09.013943   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.013954   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:45:09.013961   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:45:09.014030   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:45:09.051370   68713 cri.go:89] found id: ""
	I0815 18:45:09.051395   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.051403   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:45:09.051409   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:45:09.051477   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:45:09.086615   68713 cri.go:89] found id: ""
	I0815 18:45:09.086646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.086653   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:45:09.086659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:45:09.086708   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:45:09.122335   68713 cri.go:89] found id: ""
	I0815 18:45:09.122370   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.122381   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:45:09.122389   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:45:09.122453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:45:09.163207   68713 cri.go:89] found id: ""
	I0815 18:45:09.163232   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.163241   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:45:09.163247   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:45:09.163308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:45:09.199396   68713 cri.go:89] found id: ""
	I0815 18:45:09.199426   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.199437   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:45:09.199444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:45:09.199504   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:45:09.235073   68713 cri.go:89] found id: ""
	I0815 18:45:09.235101   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.235112   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:45:09.235120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:45:09.235180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:45:09.271614   68713 cri.go:89] found id: ""
	I0815 18:45:09.271646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.271659   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:45:09.271671   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:45:09.271686   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:45:09.372192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:45:09.372214   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:45:09.372231   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:45:09.496743   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:45:09.496780   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:45:09.540434   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:45:09.540471   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:45:09.595546   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:45:09.595584   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 18:45:09.609831   68713 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:45:09.609885   68713 out.go:270] * 
	W0815 18:45:09.609942   68713 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.609956   68713 out.go:270] * 
	W0815 18:45:09.610794   68713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:45:09.614213   68713 out.go:201] 
	W0815 18:45:09.615379   68713 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.615420   68713 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:45:09.615437   68713 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:45:09.616840   68713 out.go:201] 
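	The suggestion above (--extra-config=kubelet.cgroup-driver=systemd) maps onto a start invocation roughly like the one below. This is a sketch only: the profile name, kvm2 driver, crio runtime, and Kubernetes version are taken from this report, not from a known-good run.
	    out/minikube-linux-amd64 start -p old-k8s-version-278865 \
	      --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.20.0 \
	      --extra-config=kubelet.cgroup-driver=systemd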
	
	
	==> CRI-O <==
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.849804212Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748054849777028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d920f1b9-c233-4596-9dd2-ebc92bbfbb29 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.850354990Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=582ec55e-05f7-4220-8f98-625e000ddd10 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.850402732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=582ec55e-05f7-4220-8f98-625e000ddd10 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.850435667Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=582ec55e-05f7-4220-8f98-625e000ddd10 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.883920313Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e46962c4-17db-45b5-b91b-c8be39ac56e7 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.883998570Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e46962c4-17db-45b5-b91b-c8be39ac56e7 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.885301472Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06ffcb6d-daa2-4fcc-8787-c343ea83e59d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.885768122Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748054885739389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06ffcb6d-daa2-4fcc-8787-c343ea83e59d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.886269123Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4bd5f98-620e-4fbe-acdb-5c4e3d1a05da name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.886338438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4bd5f98-620e-4fbe-acdb-5c4e3d1a05da name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.886377914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e4bd5f98-620e-4fbe-acdb-5c4e3d1a05da name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.918299160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d580f64-04d9-4fb7-b8cc-bdf738779756 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.918384996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d580f64-04d9-4fb7-b8cc-bdf738779756 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.919271070Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de1eaf98-8f30-45f9-8c14-2bfd2d2836fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.919720266Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748054919693930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de1eaf98-8f30-45f9-8c14-2bfd2d2836fa name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.920173861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be805536-1d78-4d49-9e1c-cc5b3fc3f92d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.920242001Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be805536-1d78-4d49-9e1c-cc5b3fc3f92d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.920286083Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=be805536-1d78-4d49-9e1c-cc5b3fc3f92d name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.953473211Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95348d21-66e1-4f0f-80f0-da4bfa6badc8 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.953605882Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95348d21-66e1-4f0f-80f0-da4bfa6badc8 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.955037963Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9959332-e448-4fdd-913a-0fcba0958cda name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.955431114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748054955409138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9959332-e448-4fdd-913a-0fcba0958cda name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.956088696Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cf4fe262-59de-4e77-b082-2f91f871d365 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.956167789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cf4fe262-59de-4e77-b082-2f91f871d365 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:54:14 old-k8s-version-278865 crio[649]: time="2024-08-15 18:54:14.956201783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cf4fe262-59de-4e77-b082-2f91f871d365 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug15 18:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055068] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040001] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.968285] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.579604] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625301] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 18:37] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.058621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064012] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.191090] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.131642] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.264819] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.501610] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.065792] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.624202] systemd-fstab-generator[1024]: Ignoring "noauto" option for root device
	[ +13.041505] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 18:41] systemd-fstab-generator[5085]: Ignoring "noauto" option for root device
	[Aug15 18:43] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[  +0.068065] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:54:15 up 17 min,  0 users,  load average: 0.00, 0.04, 0.06
	Linux old-k8s-version-278865 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager.(*ListPager).List(0xc0009d7e60, 0x4f7fe00, 0xc000120010, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/pager/pager.go:91 +0x179
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1.1(0xc000095b60, 0xc00053e2a0, 0xc000025a70, 0xc000404d20, 0xc0003990ec, 0xc000404d30, 0xc00094fe60)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:302 +0x1a5
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatch.func1
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:268 +0x295
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: goroutine 154 [select]:
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: net.(*Resolver).lookupIPAddr(0x70c5740, 0x4f7fe40, 0xc0001dc1e0, 0x48ab5d6, 0x3, 0xc000b1c300, 0x1f, 0x20fb, 0x0, 0x0, ...)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /usr/local/go/src/net/lookup.go:299 +0x685
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: net.(*Resolver).internetAddrList(0x70c5740, 0x4f7fe40, 0xc0001dc1e0, 0x48ab5d6, 0x3, 0xc000b1c300, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc0001dc1e0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000b1c300, 0x24, 0x0, ...)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: net.(*Dialer).DialContext(0xc000c65bc0, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b1c300, 0x24, 0x0, 0x0, 0x0, ...)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000ba6280, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b1c300, 0x24, 0x60, 0x7fd688b87ec8, 0x118, ...)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: net/http.(*Transport).dial(0xc0008d2780, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b1c300, 0x24, 0x0, 0x12c, 0x9f00000096, ...)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: net/http.(*Transport).dialConn(0xc0008d2780, 0x4f7fe00, 0xc000120018, 0x0, 0xc00094ff20, 0x5, 0xc000b1c300, 0x24, 0x0, 0xc0004430e0, ...)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: net/http.(*Transport).dialConnFor(0xc0008d2780, 0xc00075e000)
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]: created by net/http.(*Transport).queueForDial
	Aug 15 18:54:15 old-k8s-version-278865 kubelet[6566]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 2 (224.195756ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-278865" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (501.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-15 18:58:48.600054927 +0000 UTC m=+6819.578160099
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-423062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-423062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.726µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-423062 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
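The check that fails at start_stop_delete_test.go:297 is, in effect, an image match on the dashboard-metrics-scraper deployment. A minimal way to reproduce it by hand once the apiserver is reachable (the jsonpath expression below is illustrative; the context, namespace, deployment name, and expected image are the ones shown in the log above):

	# list the container images used by the dashboard-metrics-scraper deployment
	kubectl --context default-k8s-diff-port-423062 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the test expects this output to contain registry.k8s.io/echoserver:1.4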
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-423062 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-423062 logs -n 25: (1.392451324s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p no-preload-599042                  | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC | 15 Aug 24 18:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-555028                 | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-423062       | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-278865             | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:55 UTC | 15 Aug 24 18:55 UTC |
	| start   | -p newest-cni-828957 --memory=2200 --alsologtostderr   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:55 UTC | 15 Aug 24 18:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-828957             | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:56 UTC | 15 Aug 24 18:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-828957                                   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:56 UTC | 15 Aug 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-828957                  | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:56 UTC | 15 Aug 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-828957 --memory=2200 --alsologtostderr   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:56 UTC | 15 Aug 24 18:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-828957 image list                           | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC | 15 Aug 24 18:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-828957                                   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC | 15 Aug 24 18:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-828957                                   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC | 15 Aug 24 18:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC | 15 Aug 24 18:57 UTC |
	| delete  | -p newest-cni-828957                                   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC | 15 Aug 24 18:57 UTC |
	| start   | -p auto-443473 --memory=3072                           | auto-443473                  | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-828957                                   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC | 15 Aug 24 18:57 UTC |
	| start   | -p kindnet-443473                                      | kindnet-443473               | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC | 15 Aug 24 18:57 UTC |
	| start   | -p calico-443473 --memory=3072                         | calico-443473                | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:57:38
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:57:38.506531   76538 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:57:38.506774   76538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:57:38.506789   76538 out.go:358] Setting ErrFile to fd 2...
	I0815 18:57:38.506795   76538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:57:38.506947   76538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:57:38.507493   76538 out.go:352] Setting JSON to false
	I0815 18:57:38.508424   76538 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9604,"bootTime":1723738654,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:57:38.508476   76538 start.go:139] virtualization: kvm guest
	I0815 18:57:38.510809   76538 out.go:177] * [calico-443473] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:57:38.512281   76538 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:57:38.512347   76538 notify.go:220] Checking for updates...
	I0815 18:57:38.514803   76538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:57:38.516193   76538 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:57:38.517634   76538 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:57:38.519193   76538 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:57:38.520755   76538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:57:38.522554   76538 config.go:182] Loaded profile config "auto-443473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:38.522695   76538 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:38.522842   76538 config.go:182] Loaded profile config "kindnet-443473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:38.522950   76538 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:57:38.560144   76538 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 18:57:38.561621   76538 start.go:297] selected driver: kvm2
	I0815 18:57:38.561647   76538 start.go:901] validating driver "kvm2" against <nil>
	I0815 18:57:38.561663   76538 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:57:38.562691   76538 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:57:38.562779   76538 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:57:38.577709   76538 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:57:38.577752   76538 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 18:57:38.577949   76538 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:57:38.578007   76538 cni.go:84] Creating CNI manager for "calico"
	I0815 18:57:38.578017   76538 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0815 18:57:38.578066   76538 start.go:340] cluster config:
	{Name:calico-443473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:57:38.578179   76538 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:57:38.580033   76538 out.go:177] * Starting "calico-443473" primary control-plane node in "calico-443473" cluster
	I0815 18:57:35.584570   76153 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 18:57:35.584718   76153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:35.584760   76153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:35.600162   76153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I0815 18:57:35.600602   76153 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:35.601237   76153 main.go:141] libmachine: Using API Version  1
	I0815 18:57:35.601264   76153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:35.601652   76153 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:35.601871   76153 main.go:141] libmachine: (auto-443473) Calling .GetMachineName
	I0815 18:57:35.602069   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:57:35.602247   76153 start.go:159] libmachine.API.Create for "auto-443473" (driver="kvm2")
	I0815 18:57:35.602273   76153 client.go:168] LocalClient.Create starting
	I0815 18:57:35.602312   76153 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 18:57:35.602355   76153 main.go:141] libmachine: Decoding PEM data...
	I0815 18:57:35.602378   76153 main.go:141] libmachine: Parsing certificate...
	I0815 18:57:35.602455   76153 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 18:57:35.602481   76153 main.go:141] libmachine: Decoding PEM data...
	I0815 18:57:35.602501   76153 main.go:141] libmachine: Parsing certificate...
	I0815 18:57:35.602525   76153 main.go:141] libmachine: Running pre-create checks...
	I0815 18:57:35.602542   76153 main.go:141] libmachine: (auto-443473) Calling .PreCreateCheck
	I0815 18:57:35.603016   76153 main.go:141] libmachine: (auto-443473) Calling .GetConfigRaw
	I0815 18:57:35.603464   76153 main.go:141] libmachine: Creating machine...
	I0815 18:57:35.603481   76153 main.go:141] libmachine: (auto-443473) Calling .Create
	I0815 18:57:35.603655   76153 main.go:141] libmachine: (auto-443473) Creating KVM machine...
	I0815 18:57:35.605139   76153 main.go:141] libmachine: (auto-443473) DBG | found existing default KVM network
	I0815 18:57:35.606385   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:35.606216   76195 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014ac0}
	I0815 18:57:35.606419   76153 main.go:141] libmachine: (auto-443473) DBG | created network xml: 
	I0815 18:57:35.606435   76153 main.go:141] libmachine: (auto-443473) DBG | <network>
	I0815 18:57:35.606448   76153 main.go:141] libmachine: (auto-443473) DBG |   <name>mk-auto-443473</name>
	I0815 18:57:35.606457   76153 main.go:141] libmachine: (auto-443473) DBG |   <dns enable='no'/>
	I0815 18:57:35.606462   76153 main.go:141] libmachine: (auto-443473) DBG |   
	I0815 18:57:35.606473   76153 main.go:141] libmachine: (auto-443473) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0815 18:57:35.606480   76153 main.go:141] libmachine: (auto-443473) DBG |     <dhcp>
	I0815 18:57:35.606489   76153 main.go:141] libmachine: (auto-443473) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0815 18:57:35.606500   76153 main.go:141] libmachine: (auto-443473) DBG |     </dhcp>
	I0815 18:57:35.606508   76153 main.go:141] libmachine: (auto-443473) DBG |   </ip>
	I0815 18:57:35.606514   76153 main.go:141] libmachine: (auto-443473) DBG |   
	I0815 18:57:35.606528   76153 main.go:141] libmachine: (auto-443473) DBG | </network>
	I0815 18:57:35.606534   76153 main.go:141] libmachine: (auto-443473) DBG | 
	I0815 18:57:35.611842   76153 main.go:141] libmachine: (auto-443473) DBG | trying to create private KVM network mk-auto-443473 192.168.39.0/24...
	I0815 18:57:35.686334   76153 main.go:141] libmachine: (auto-443473) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473 ...
	I0815 18:57:35.686365   76153 main.go:141] libmachine: (auto-443473) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 18:57:35.686377   76153 main.go:141] libmachine: (auto-443473) DBG | private KVM network mk-auto-443473 192.168.39.0/24 created
	I0815 18:57:35.686401   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:35.686281   76195 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:57:35.686428   76153 main.go:141] libmachine: (auto-443473) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 18:57:35.974245   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:35.974079   76195 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa...
	I0815 18:57:36.231637   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:36.231499   76195 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/auto-443473.rawdisk...
	I0815 18:57:36.231671   76153 main.go:141] libmachine: (auto-443473) DBG | Writing magic tar header
	I0815 18:57:36.231685   76153 main.go:141] libmachine: (auto-443473) DBG | Writing SSH key tar header
	I0815 18:57:36.231693   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:36.231627   76195 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473 ...
	I0815 18:57:36.231711   76153 main.go:141] libmachine: (auto-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473
	I0815 18:57:36.231789   76153 main.go:141] libmachine: (auto-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473 (perms=drwx------)
	I0815 18:57:36.231821   76153 main.go:141] libmachine: (auto-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 18:57:36.231835   76153 main.go:141] libmachine: (auto-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 18:57:36.231847   76153 main.go:141] libmachine: (auto-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 18:57:36.231865   76153 main.go:141] libmachine: (auto-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 18:57:36.231877   76153 main.go:141] libmachine: (auto-443473) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 18:57:36.231891   76153 main.go:141] libmachine: (auto-443473) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 18:57:36.231904   76153 main.go:141] libmachine: (auto-443473) Creating domain...
	I0815 18:57:36.231914   76153 main.go:141] libmachine: (auto-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:57:36.231926   76153 main.go:141] libmachine: (auto-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 18:57:36.231935   76153 main.go:141] libmachine: (auto-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 18:57:36.231951   76153 main.go:141] libmachine: (auto-443473) DBG | Checking permissions on dir: /home/jenkins
	I0815 18:57:36.231962   76153 main.go:141] libmachine: (auto-443473) DBG | Checking permissions on dir: /home
	I0815 18:57:36.231979   76153 main.go:141] libmachine: (auto-443473) DBG | Skipping /home - not owner
	I0815 18:57:36.232974   76153 main.go:141] libmachine: (auto-443473) define libvirt domain using xml: 
	I0815 18:57:36.232995   76153 main.go:141] libmachine: (auto-443473) <domain type='kvm'>
	I0815 18:57:36.233005   76153 main.go:141] libmachine: (auto-443473)   <name>auto-443473</name>
	I0815 18:57:36.233019   76153 main.go:141] libmachine: (auto-443473)   <memory unit='MiB'>3072</memory>
	I0815 18:57:36.233031   76153 main.go:141] libmachine: (auto-443473)   <vcpu>2</vcpu>
	I0815 18:57:36.233038   76153 main.go:141] libmachine: (auto-443473)   <features>
	I0815 18:57:36.233047   76153 main.go:141] libmachine: (auto-443473)     <acpi/>
	I0815 18:57:36.233054   76153 main.go:141] libmachine: (auto-443473)     <apic/>
	I0815 18:57:36.233063   76153 main.go:141] libmachine: (auto-443473)     <pae/>
	I0815 18:57:36.233085   76153 main.go:141] libmachine: (auto-443473)     
	I0815 18:57:36.233138   76153 main.go:141] libmachine: (auto-443473)   </features>
	I0815 18:57:36.233164   76153 main.go:141] libmachine: (auto-443473)   <cpu mode='host-passthrough'>
	I0815 18:57:36.233174   76153 main.go:141] libmachine: (auto-443473)   
	I0815 18:57:36.233181   76153 main.go:141] libmachine: (auto-443473)   </cpu>
	I0815 18:57:36.233192   76153 main.go:141] libmachine: (auto-443473)   <os>
	I0815 18:57:36.233198   76153 main.go:141] libmachine: (auto-443473)     <type>hvm</type>
	I0815 18:57:36.233206   76153 main.go:141] libmachine: (auto-443473)     <boot dev='cdrom'/>
	I0815 18:57:36.233211   76153 main.go:141] libmachine: (auto-443473)     <boot dev='hd'/>
	I0815 18:57:36.233217   76153 main.go:141] libmachine: (auto-443473)     <bootmenu enable='no'/>
	I0815 18:57:36.233222   76153 main.go:141] libmachine: (auto-443473)   </os>
	I0815 18:57:36.233238   76153 main.go:141] libmachine: (auto-443473)   <devices>
	I0815 18:57:36.233252   76153 main.go:141] libmachine: (auto-443473)     <disk type='file' device='cdrom'>
	I0815 18:57:36.233265   76153 main.go:141] libmachine: (auto-443473)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/boot2docker.iso'/>
	I0815 18:57:36.233281   76153 main.go:141] libmachine: (auto-443473)       <target dev='hdc' bus='scsi'/>
	I0815 18:57:36.233293   76153 main.go:141] libmachine: (auto-443473)       <readonly/>
	I0815 18:57:36.233301   76153 main.go:141] libmachine: (auto-443473)     </disk>
	I0815 18:57:36.233311   76153 main.go:141] libmachine: (auto-443473)     <disk type='file' device='disk'>
	I0815 18:57:36.233317   76153 main.go:141] libmachine: (auto-443473)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 18:57:36.233325   76153 main.go:141] libmachine: (auto-443473)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/auto-443473.rawdisk'/>
	I0815 18:57:36.233335   76153 main.go:141] libmachine: (auto-443473)       <target dev='hda' bus='virtio'/>
	I0815 18:57:36.233347   76153 main.go:141] libmachine: (auto-443473)     </disk>
	I0815 18:57:36.233362   76153 main.go:141] libmachine: (auto-443473)     <interface type='network'>
	I0815 18:57:36.233374   76153 main.go:141] libmachine: (auto-443473)       <source network='mk-auto-443473'/>
	I0815 18:57:36.233384   76153 main.go:141] libmachine: (auto-443473)       <model type='virtio'/>
	I0815 18:57:36.233393   76153 main.go:141] libmachine: (auto-443473)     </interface>
	I0815 18:57:36.233403   76153 main.go:141] libmachine: (auto-443473)     <interface type='network'>
	I0815 18:57:36.233412   76153 main.go:141] libmachine: (auto-443473)       <source network='default'/>
	I0815 18:57:36.233420   76153 main.go:141] libmachine: (auto-443473)       <model type='virtio'/>
	I0815 18:57:36.233442   76153 main.go:141] libmachine: (auto-443473)     </interface>
	I0815 18:57:36.233457   76153 main.go:141] libmachine: (auto-443473)     <serial type='pty'>
	I0815 18:57:36.233469   76153 main.go:141] libmachine: (auto-443473)       <target port='0'/>
	I0815 18:57:36.233480   76153 main.go:141] libmachine: (auto-443473)     </serial>
	I0815 18:57:36.233491   76153 main.go:141] libmachine: (auto-443473)     <console type='pty'>
	I0815 18:57:36.233502   76153 main.go:141] libmachine: (auto-443473)       <target type='serial' port='0'/>
	I0815 18:57:36.233513   76153 main.go:141] libmachine: (auto-443473)     </console>
	I0815 18:57:36.233521   76153 main.go:141] libmachine: (auto-443473)     <rng model='virtio'>
	I0815 18:57:36.233539   76153 main.go:141] libmachine: (auto-443473)       <backend model='random'>/dev/random</backend>
	I0815 18:57:36.233555   76153 main.go:141] libmachine: (auto-443473)     </rng>
	I0815 18:57:36.233565   76153 main.go:141] libmachine: (auto-443473)     
	I0815 18:57:36.233577   76153 main.go:141] libmachine: (auto-443473)     
	I0815 18:57:36.233589   76153 main.go:141] libmachine: (auto-443473)   </devices>
	I0815 18:57:36.233599   76153 main.go:141] libmachine: (auto-443473) </domain>
	I0815 18:57:36.233611   76153 main.go:141] libmachine: (auto-443473) 
	I0815 18:57:36.237592   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:5d:c4:f1 in network default
	I0815 18:57:36.238123   76153 main.go:141] libmachine: (auto-443473) Ensuring networks are active...
	I0815 18:57:36.238143   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:36.238771   76153 main.go:141] libmachine: (auto-443473) Ensuring network default is active
	I0815 18:57:36.239094   76153 main.go:141] libmachine: (auto-443473) Ensuring network mk-auto-443473 is active
	I0815 18:57:36.239641   76153 main.go:141] libmachine: (auto-443473) Getting domain xml...
	I0815 18:57:36.240272   76153 main.go:141] libmachine: (auto-443473) Creating domain...
	I0815 18:57:37.545201   76153 main.go:141] libmachine: (auto-443473) Waiting to get IP...
	I0815 18:57:37.546250   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:37.546775   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:37.546809   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:37.546762   76195 retry.go:31] will retry after 302.278174ms: waiting for machine to come up
	I0815 18:57:38.241910   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:38.242546   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:38.242574   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:38.242503   76195 retry.go:31] will retry after 373.88391ms: waiting for machine to come up
	I0815 18:57:38.617947   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:38.618580   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:38.618617   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:38.618555   76195 retry.go:31] will retry after 385.61014ms: waiting for machine to come up
	I0815 18:57:39.006105   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:39.006720   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:39.006742   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:39.006669   76195 retry.go:31] will retry after 438.666212ms: waiting for machine to come up
	I0815 18:57:39.447325   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:39.447706   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:39.447734   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:39.447657   76195 retry.go:31] will retry after 630.763048ms: waiting for machine to come up
	I0815 18:57:36.080138   76334 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:57:36.080168   76334 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:57:36.080179   76334 cache.go:56] Caching tarball of preloaded images
	I0815 18:57:36.080248   76334 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:57:36.080260   76334 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 18:57:36.080372   76334 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/config.json ...
	I0815 18:57:36.080396   76334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/config.json: {Name:mk5d91b5014c2faa3696f901308dc74eedd8d169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:57:36.080566   76334 start.go:360] acquireMachinesLock for kindnet-443473: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:57:38.581256   76538 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:57:38.581290   76538 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:57:38.581297   76538 cache.go:56] Caching tarball of preloaded images
	I0815 18:57:38.581383   76538 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:57:38.581396   76538 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 18:57:38.581502   76538 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/calico-443473/config.json ...
	I0815 18:57:38.581522   76538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/calico-443473/config.json: {Name:mk33c44ad1e423096adf92a869020e712b6782d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:57:38.581676   76538 start.go:360] acquireMachinesLock for calico-443473: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:57:40.079819   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:40.080364   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:40.080396   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:40.080314   76195 retry.go:31] will retry after 857.414314ms: waiting for machine to come up
	I0815 18:57:40.938670   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:40.939109   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:40.939154   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:40.939057   76195 retry.go:31] will retry after 1.158921348s: waiting for machine to come up
	I0815 18:57:42.099427   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:42.099869   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:42.099896   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:42.099833   76195 retry.go:31] will retry after 1.352788537s: waiting for machine to come up
	I0815 18:57:43.453720   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:43.454162   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:43.454184   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:43.454133   76195 retry.go:31] will retry after 1.397452263s: waiting for machine to come up
	I0815 18:57:44.853799   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:44.854315   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:44.854353   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:44.854260   76195 retry.go:31] will retry after 2.059441519s: waiting for machine to come up
	I0815 18:57:46.914982   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:46.915538   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:46.915567   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:46.915479   76195 retry.go:31] will retry after 2.684452056s: waiting for machine to come up
	I0815 18:57:49.603200   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:49.603725   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:49.603755   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:49.603675   76195 retry.go:31] will retry after 3.24944312s: waiting for machine to come up
	I0815 18:57:52.855564   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:52.856000   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find current IP address of domain auto-443473 in network mk-auto-443473
	I0815 18:57:52.856021   76153 main.go:141] libmachine: (auto-443473) DBG | I0815 18:57:52.855960   76195 retry.go:31] will retry after 4.268475637s: waiting for machine to come up
	I0815 18:57:58.541289   76334 start.go:364] duration metric: took 22.460683899s to acquireMachinesLock for "kindnet-443473"
	I0815 18:57:58.541350   76334 start.go:93] Provisioning new machine with config: &{Name:kindnet-443473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:57:58.541483   76334 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 18:57:57.126212   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.126682   76153 main.go:141] libmachine: (auto-443473) Found IP for machine: 192.168.39.187
	I0815 18:57:57.126698   76153 main.go:141] libmachine: (auto-443473) Reserving static IP address...
	I0815 18:57:57.126709   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has current primary IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.127046   76153 main.go:141] libmachine: (auto-443473) DBG | unable to find host DHCP lease matching {name: "auto-443473", mac: "52:54:00:c6:88:11", ip: "192.168.39.187"} in network mk-auto-443473
	I0815 18:57:57.201573   76153 main.go:141] libmachine: (auto-443473) DBG | Getting to WaitForSSH function...
	I0815 18:57:57.201607   76153 main.go:141] libmachine: (auto-443473) Reserved static IP address: 192.168.39.187
	I0815 18:57:57.201621   76153 main.go:141] libmachine: (auto-443473) Waiting for SSH to be available...
	I0815 18:57:57.204338   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.204761   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:57.204789   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.204942   76153 main.go:141] libmachine: (auto-443473) DBG | Using SSH client type: external
	I0815 18:57:57.204970   76153 main.go:141] libmachine: (auto-443473) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa (-rw-------)
	I0815 18:57:57.205017   76153 main.go:141] libmachine: (auto-443473) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.187 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:57:57.205034   76153 main.go:141] libmachine: (auto-443473) DBG | About to run SSH command:
	I0815 18:57:57.205050   76153 main.go:141] libmachine: (auto-443473) DBG | exit 0
	I0815 18:57:57.328823   76153 main.go:141] libmachine: (auto-443473) DBG | SSH cmd err, output: <nil>: 
	I0815 18:57:57.329111   76153 main.go:141] libmachine: (auto-443473) KVM machine creation complete!
	I0815 18:57:57.329432   76153 main.go:141] libmachine: (auto-443473) Calling .GetConfigRaw
	I0815 18:57:57.329911   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:57:57.330072   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:57:57.330236   76153 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 18:57:57.330251   76153 main.go:141] libmachine: (auto-443473) Calling .GetState
	I0815 18:57:57.331580   76153 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 18:57:57.331593   76153 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 18:57:57.331599   76153 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 18:57:57.331604   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:57.334181   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.334549   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:57.334573   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.334713   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:57.334938   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.335093   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.335269   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:57.335429   76153 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:57.335653   76153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0815 18:57:57.335667   76153 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 18:57:57.431655   76153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:57:57.431680   76153 main.go:141] libmachine: Detecting the provisioner...
	I0815 18:57:57.431688   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:57.434573   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.434903   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:57.434931   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.435125   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:57.435325   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.435516   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.435656   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:57.435814   76153 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:57.436023   76153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0815 18:57:57.436036   76153 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 18:57:57.537044   76153 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 18:57:57.537109   76153 main.go:141] libmachine: found compatible host: buildroot
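
Provisioner detection above boils down to running "cat /etc/os-release" and reading the ID field. A small sketch of that parse (the function name is mine, not libmachine's):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectOS pulls the ID field out of /etc/os-release-style output, which is
    // how the log above concludes "found compatible host: buildroot".
    func detectOS(osRelease string) string {
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "ID=") {
                return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
            }
        }
        return ""
    }

    func main() {
        out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
        fmt.Println(detectOS(out)) // buildroot
    }
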
	I0815 18:57:57.537118   76153 main.go:141] libmachine: Provisioning with buildroot...
	I0815 18:57:57.537129   76153 main.go:141] libmachine: (auto-443473) Calling .GetMachineName
	I0815 18:57:57.537383   76153 buildroot.go:166] provisioning hostname "auto-443473"
	I0815 18:57:57.537411   76153 main.go:141] libmachine: (auto-443473) Calling .GetMachineName
	I0815 18:57:57.537598   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:57.540294   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.540654   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:57.540681   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.540792   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:57.540947   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.541117   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.541230   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:57.541382   76153 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:57.541584   76153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0815 18:57:57.541598   76153 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-443473 && echo "auto-443473" | sudo tee /etc/hostname
	I0815 18:57:57.654988   76153 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-443473
	
	I0815 18:57:57.655015   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:57.657997   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.658720   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:57.658743   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.658980   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:57.659166   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.659340   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.659459   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:57.659610   76153 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:57.659829   76153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0815 18:57:57.659852   76153 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-443473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-443473/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-443473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:57:57.769853   76153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:57:57.769880   76153 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:57:57.769919   76153 buildroot.go:174] setting up certificates
	I0815 18:57:57.769930   76153 provision.go:84] configureAuth start
	I0815 18:57:57.769941   76153 main.go:141] libmachine: (auto-443473) Calling .GetMachineName
	I0815 18:57:57.770223   76153 main.go:141] libmachine: (auto-443473) Calling .GetIP
	I0815 18:57:57.772872   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.773215   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:57.773250   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.773405   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:57.775592   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.775923   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:57.775947   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.776059   76153 provision.go:143] copyHostCerts
	I0815 18:57:57.776114   76153 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:57:57.776133   76153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:57:57.776197   76153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:57:57.776336   76153 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:57:57.776347   76153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:57:57.776381   76153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:57:57.776461   76153 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:57:57.776471   76153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:57:57.776511   76153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:57:57.776586   76153 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.auto-443473 san=[127.0.0.1 192.168.39.187 auto-443473 localhost minikube]
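
The server certificate generated here is an ordinary CA-signed cert whose SANs are exactly the list in the log (127.0.0.1, 192.168.39.187, auto-443473, localhost, minikube). The sketch below reproduces that shape with crypto/x509; it creates a throwaway CA in place of ca.pem/ca-key.pem and ignores errors for brevity, so it is a simplified illustration rather than minikube's provision.go code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for ca.pem/ca-key.pem from the log.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the same SANs the log reports for auto-443473.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.auto-443473"}},
            DNSNames:     []string{"auto-443473", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.187")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
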
	I0815 18:57:57.885866   76153 provision.go:177] copyRemoteCerts
	I0815 18:57:57.885918   76153 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:57:57.885939   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:57.888913   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.889393   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:57.889422   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:57.889601   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:57.889822   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:57.890026   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:57.890153   76153 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa Username:docker}
	I0815 18:57:57.970457   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:57:57.994852   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0815 18:57:58.018987   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:57:58.043445   76153 provision.go:87] duration metric: took 273.503947ms to configureAuth
	I0815 18:57:58.043471   76153 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:57:58.043637   76153 config.go:182] Loaded profile config "auto-443473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:58.043720   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:58.046618   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.047009   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:58.047039   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.047218   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:58.047423   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:58.047599   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:58.047740   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:58.047946   76153 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:58.048150   76153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0815 18:57:58.048179   76153 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:57:58.310664   76153 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:57:58.310690   76153 main.go:141] libmachine: Checking connection to Docker...
	I0815 18:57:58.310698   76153 main.go:141] libmachine: (auto-443473) Calling .GetURL
	I0815 18:57:58.312032   76153 main.go:141] libmachine: (auto-443473) DBG | Using libvirt version 6000000
	I0815 18:57:58.314588   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.314871   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:58.314897   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.315025   76153 main.go:141] libmachine: Docker is up and running!
	I0815 18:57:58.315038   76153 main.go:141] libmachine: Reticulating splines...
	I0815 18:57:58.315047   76153 client.go:171] duration metric: took 22.712766084s to LocalClient.Create
	I0815 18:57:58.315075   76153 start.go:167] duration metric: took 22.712828619s to libmachine.API.Create "auto-443473"
	I0815 18:57:58.315086   76153 start.go:293] postStartSetup for "auto-443473" (driver="kvm2")
	I0815 18:57:58.315105   76153 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:57:58.315128   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:57:58.315376   76153 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:57:58.315396   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:58.317597   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.317902   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:58.317930   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.318063   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:58.318242   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:58.318386   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:58.318531   76153 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa Username:docker}
	I0815 18:57:58.399008   76153 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:57:58.403123   76153 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:57:58.403145   76153 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:57:58.403201   76153 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:57:58.403268   76153 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:57:58.403362   76153 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:57:58.412600   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:57:58.436246   76153 start.go:296] duration metric: took 121.141486ms for postStartSetup
	I0815 18:57:58.436295   76153 main.go:141] libmachine: (auto-443473) Calling .GetConfigRaw
	I0815 18:57:58.436949   76153 main.go:141] libmachine: (auto-443473) Calling .GetIP
	I0815 18:57:58.439161   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.439588   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:58.439619   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.439826   76153 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/config.json ...
	I0815 18:57:58.440001   76153 start.go:128] duration metric: took 22.856941288s to createHost
	I0815 18:57:58.440024   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:58.442151   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.442451   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:58.442478   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.442605   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:58.442762   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:58.442905   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:58.443017   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:58.443175   76153 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:58.443387   76153 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.187 22 <nil> <nil>}
	I0815 18:57:58.443401   76153 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:57:58.541110   76153 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723748278.515325703
	
	I0815 18:57:58.541135   76153 fix.go:216] guest clock: 1723748278.515325703
	I0815 18:57:58.541150   76153 fix.go:229] Guest: 2024-08-15 18:57:58.515325703 +0000 UTC Remote: 2024-08-15 18:57:58.4400123 +0000 UTC m=+23.432409597 (delta=75.313403ms)
	I0815 18:57:58.541174   76153 fix.go:200] guest clock delta is within tolerance: 75.313403ms
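
The guest-clock check above parses the guest's "date +%s.%N" output and compares it with the host clock, accepting the machine when the difference stays inside a tolerance. A rough reconstruction follows; the one-second tolerance is an assumption, and float parsing loses sub-microsecond precision, so it only approximates the 75.313403ms delta reported in the log.

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta turns the guest's "date +%s.%N" output into a time and returns
    // how far it is from the host clock.
    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return host.Sub(guest), nil
    }

    func main() {
        // Values taken from the log lines above.
        host := time.Date(2024, 8, 15, 18, 57, 58, 440012300, time.UTC)
        delta, _ := clockDelta("1723748278.515325703", host)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < time.Second)
    }
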
	I0815 18:57:58.541192   76153 start.go:83] releasing machines lock for "auto-443473", held for 22.958190726s
	I0815 18:57:58.541226   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:57:58.541527   76153 main.go:141] libmachine: (auto-443473) Calling .GetIP
	I0815 18:57:58.544347   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.544703   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:58.544730   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.544888   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:57:58.545342   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:57:58.545510   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:57:58.545585   76153 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:57:58.545618   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:58.545709   76153 ssh_runner.go:195] Run: cat /version.json
	I0815 18:57:58.545753   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:57:58.548072   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.548438   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:58.548463   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.548515   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.548628   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:58.548795   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:58.548961   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:58.549012   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:57:58.549056   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:57:58.549073   76153 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa Username:docker}
	I0815 18:57:58.549182   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:57:58.549290   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:57:58.549477   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:57:58.549627   76153 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa Username:docker}
	I0815 18:57:58.633693   76153 ssh_runner.go:195] Run: systemctl --version
	I0815 18:57:58.654757   76153 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:57:58.822296   76153 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:57:58.828532   76153 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:57:58.828603   76153 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:57:58.847773   76153 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:57:58.847793   76153 start.go:495] detecting cgroup driver to use...
	I0815 18:57:58.847859   76153 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:57:58.868104   76153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:57:58.885110   76153 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:57:58.885171   76153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:57:58.900412   76153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:57:58.915156   76153 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:57:59.041635   76153 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:57:59.215571   76153 docker.go:233] disabling docker service ...
	I0815 18:57:59.215640   76153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:57:59.230755   76153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:57:59.243134   76153 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:57:59.365987   76153 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:57:59.487853   76153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:57:59.503043   76153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:57:59.521970   76153 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:57:59.522019   76153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:59.531966   76153 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:57:59.532027   76153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:59.541767   76153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:59.551674   76153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:59.561934   76153 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:57:59.572958   76153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:59.582674   76153 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:59.599548   76153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:59.609986   76153 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:57:59.619053   76153 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:57:59.619110   76153 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:57:59.632544   76153 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
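
The three commands above form a fallback chain: verify bridge netfilter via sysctl, load br_netfilter when the /proc entry is missing, and make sure IPv4 forwarding is on. Condensed into a sketch below; the local run helper stands in for ssh_runner and is not minikube's API.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a shell command locally, standing in for minikube's
    // ssh_runner in this sketch.
    func run(cmd string) error {
        return exec.Command("sh", "-c", cmd).Run()
    }

    func main() {
        // Same fallback order as the log: probe the sysctl, and if the bridge
        // netfilter files are missing, load br_netfilter, then enable forwarding.
        if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
            if err := run("sudo modprobe br_netfilter"); err != nil {
                fmt.Println("modprobe failed:", err)
            }
        }
        if err := run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }
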
	I0815 18:57:59.643221   76153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:57:59.778463   76153 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:57:59.930962   76153 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:57:59.931047   76153 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:57:59.935855   76153 start.go:563] Will wait 60s for crictl version
	I0815 18:57:59.935916   76153 ssh_runner.go:195] Run: which crictl
	I0815 18:57:59.939662   76153 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:57:59.987599   76153 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:57:59.987683   76153 ssh_runner.go:195] Run: crio --version
	I0815 18:58:00.020961   76153 ssh_runner.go:195] Run: crio --version
	I0815 18:58:00.053826   76153 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:57:58.543566   76334 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 18:57:58.543749   76334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:58.543794   76334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:58.559793   76334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35933
	I0815 18:57:58.560226   76334 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:58.560858   76334 main.go:141] libmachine: Using API Version  1
	I0815 18:57:58.560883   76334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:58.561198   76334 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:58.561398   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetMachineName
	I0815 18:57:58.561547   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:57:58.561702   76334 start.go:159] libmachine.API.Create for "kindnet-443473" (driver="kvm2")
	I0815 18:57:58.561730   76334 client.go:168] LocalClient.Create starting
	I0815 18:57:58.561761   76334 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 18:57:58.561801   76334 main.go:141] libmachine: Decoding PEM data...
	I0815 18:57:58.561821   76334 main.go:141] libmachine: Parsing certificate...
	I0815 18:57:58.561900   76334 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 18:57:58.561929   76334 main.go:141] libmachine: Decoding PEM data...
	I0815 18:57:58.561942   76334 main.go:141] libmachine: Parsing certificate...
	I0815 18:57:58.561976   76334 main.go:141] libmachine: Running pre-create checks...
	I0815 18:57:58.561989   76334 main.go:141] libmachine: (kindnet-443473) Calling .PreCreateCheck
	I0815 18:57:58.562392   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetConfigRaw
	I0815 18:57:58.562809   76334 main.go:141] libmachine: Creating machine...
	I0815 18:57:58.562823   76334 main.go:141] libmachine: (kindnet-443473) Calling .Create
	I0815 18:57:58.562965   76334 main.go:141] libmachine: (kindnet-443473) Creating KVM machine...
	I0815 18:57:58.564237   76334 main.go:141] libmachine: (kindnet-443473) DBG | found existing default KVM network
	I0815 18:57:58.565474   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:57:58.565308   76672 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ab:d6:46} reservation:<nil>}
	I0815 18:57:58.566440   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:57:58.566360   76672 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fe40}
	I0815 18:57:58.566486   76334 main.go:141] libmachine: (kindnet-443473) DBG | created network xml: 
	I0815 18:57:58.566503   76334 main.go:141] libmachine: (kindnet-443473) DBG | <network>
	I0815 18:57:58.566515   76334 main.go:141] libmachine: (kindnet-443473) DBG |   <name>mk-kindnet-443473</name>
	I0815 18:57:58.566526   76334 main.go:141] libmachine: (kindnet-443473) DBG |   <dns enable='no'/>
	I0815 18:57:58.566536   76334 main.go:141] libmachine: (kindnet-443473) DBG |   
	I0815 18:57:58.566549   76334 main.go:141] libmachine: (kindnet-443473) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0815 18:57:58.566565   76334 main.go:141] libmachine: (kindnet-443473) DBG |     <dhcp>
	I0815 18:57:58.566578   76334 main.go:141] libmachine: (kindnet-443473) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0815 18:57:58.566595   76334 main.go:141] libmachine: (kindnet-443473) DBG |     </dhcp>
	I0815 18:57:58.566606   76334 main.go:141] libmachine: (kindnet-443473) DBG |   </ip>
	I0815 18:57:58.566636   76334 main.go:141] libmachine: (kindnet-443473) DBG |   
	I0815 18:57:58.566659   76334 main.go:141] libmachine: (kindnet-443473) DBG | </network>
	I0815 18:57:58.566671   76334 main.go:141] libmachine: (kindnet-443473) DBG | 
	I0815 18:57:58.572173   76334 main.go:141] libmachine: (kindnet-443473) DBG | trying to create private KVM network mk-kindnet-443473 192.168.50.0/24...
	I0815 18:57:58.641797   76334 main.go:141] libmachine: (kindnet-443473) DBG | private KVM network mk-kindnet-443473 192.168.50.0/24 created
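
The private-network step at 18:57:58.565308/18:57:58.566360 skips 192.168.39.0/24 because it is already bound to virbr1 and settles on 192.168.50.0/24. A sketch of that kind of scan is below; the candidate list and helper are illustrative guesses, not minikube's network.go.

    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether any local interface already has an address inside
    // the candidate subnet (the reason 192.168.39.0/24 is skipped in the log).
    func taken(subnet *net.IPNet) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative if we can't inspect interfaces
        }
        for _, a := range addrs {
            if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        // Candidate 192.168.x.0/24 subnets; only 39 and 50 appear in the log,
        // the rest of the progression is assumed for illustration.
        for _, third := range []int{39, 50, 61, 72, 83} {
            cidr := fmt.Sprintf("192.168.%d.0/24", third)
            _, subnet, _ := net.ParseCIDR(cidr)
            if !taken(subnet) {
                fmt.Println("using free private subnet", cidr)
                return
            }
            fmt.Println("skipping subnet", cidr, "that is taken")
        }
    }
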
	I0815 18:57:58.641836   76334 main.go:141] libmachine: (kindnet-443473) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473 ...
	I0815 18:57:58.641851   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:57:58.641761   76672 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:57:58.641915   76334 main.go:141] libmachine: (kindnet-443473) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 18:57:58.641950   76334 main.go:141] libmachine: (kindnet-443473) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 18:57:58.884926   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:57:58.884820   76672 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa...
	I0815 18:57:59.010537   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:57:59.010366   76672 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/kindnet-443473.rawdisk...
	I0815 18:57:59.010593   76334 main.go:141] libmachine: (kindnet-443473) DBG | Writing magic tar header
	I0815 18:57:59.010610   76334 main.go:141] libmachine: (kindnet-443473) DBG | Writing SSH key tar header
	I0815 18:57:59.011237   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:57:59.011137   76672 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473 ...
	I0815 18:57:59.011270   76334 main.go:141] libmachine: (kindnet-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473
	I0815 18:57:59.011336   76334 main.go:141] libmachine: (kindnet-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473 (perms=drwx------)
	I0815 18:57:59.011969   76334 main.go:141] libmachine: (kindnet-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 18:57:59.011996   76334 main.go:141] libmachine: (kindnet-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 18:57:59.012009   76334 main.go:141] libmachine: (kindnet-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 18:57:59.012031   76334 main.go:141] libmachine: (kindnet-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 18:57:59.012044   76334 main.go:141] libmachine: (kindnet-443473) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 18:57:59.012053   76334 main.go:141] libmachine: (kindnet-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:57:59.012074   76334 main.go:141] libmachine: (kindnet-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 18:57:59.012089   76334 main.go:141] libmachine: (kindnet-443473) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 18:57:59.012099   76334 main.go:141] libmachine: (kindnet-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 18:57:59.012117   76334 main.go:141] libmachine: (kindnet-443473) DBG | Checking permissions on dir: /home/jenkins
	I0815 18:57:59.012129   76334 main.go:141] libmachine: (kindnet-443473) DBG | Checking permissions on dir: /home
	I0815 18:57:59.012143   76334 main.go:141] libmachine: (kindnet-443473) DBG | Skipping /home - not owner
	I0815 18:57:59.012156   76334 main.go:141] libmachine: (kindnet-443473) Creating domain...
	I0815 18:57:59.013327   76334 main.go:141] libmachine: (kindnet-443473) define libvirt domain using xml: 
	I0815 18:57:59.013348   76334 main.go:141] libmachine: (kindnet-443473) <domain type='kvm'>
	I0815 18:57:59.013355   76334 main.go:141] libmachine: (kindnet-443473)   <name>kindnet-443473</name>
	I0815 18:57:59.013374   76334 main.go:141] libmachine: (kindnet-443473)   <memory unit='MiB'>3072</memory>
	I0815 18:57:59.013395   76334 main.go:141] libmachine: (kindnet-443473)   <vcpu>2</vcpu>
	I0815 18:57:59.013411   76334 main.go:141] libmachine: (kindnet-443473)   <features>
	I0815 18:57:59.013452   76334 main.go:141] libmachine: (kindnet-443473)     <acpi/>
	I0815 18:57:59.013479   76334 main.go:141] libmachine: (kindnet-443473)     <apic/>
	I0815 18:57:59.013496   76334 main.go:141] libmachine: (kindnet-443473)     <pae/>
	I0815 18:57:59.013511   76334 main.go:141] libmachine: (kindnet-443473)     
	I0815 18:57:59.013522   76334 main.go:141] libmachine: (kindnet-443473)   </features>
	I0815 18:57:59.013533   76334 main.go:141] libmachine: (kindnet-443473)   <cpu mode='host-passthrough'>
	I0815 18:57:59.013540   76334 main.go:141] libmachine: (kindnet-443473)   
	I0815 18:57:59.013548   76334 main.go:141] libmachine: (kindnet-443473)   </cpu>
	I0815 18:57:59.013553   76334 main.go:141] libmachine: (kindnet-443473)   <os>
	I0815 18:57:59.013560   76334 main.go:141] libmachine: (kindnet-443473)     <type>hvm</type>
	I0815 18:57:59.013566   76334 main.go:141] libmachine: (kindnet-443473)     <boot dev='cdrom'/>
	I0815 18:57:59.013577   76334 main.go:141] libmachine: (kindnet-443473)     <boot dev='hd'/>
	I0815 18:57:59.013585   76334 main.go:141] libmachine: (kindnet-443473)     <bootmenu enable='no'/>
	I0815 18:57:59.013595   76334 main.go:141] libmachine: (kindnet-443473)   </os>
	I0815 18:57:59.013612   76334 main.go:141] libmachine: (kindnet-443473)   <devices>
	I0815 18:57:59.013632   76334 main.go:141] libmachine: (kindnet-443473)     <disk type='file' device='cdrom'>
	I0815 18:57:59.013650   76334 main.go:141] libmachine: (kindnet-443473)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/boot2docker.iso'/>
	I0815 18:57:59.013663   76334 main.go:141] libmachine: (kindnet-443473)       <target dev='hdc' bus='scsi'/>
	I0815 18:57:59.013676   76334 main.go:141] libmachine: (kindnet-443473)       <readonly/>
	I0815 18:57:59.013685   76334 main.go:141] libmachine: (kindnet-443473)     </disk>
	I0815 18:57:59.013695   76334 main.go:141] libmachine: (kindnet-443473)     <disk type='file' device='disk'>
	I0815 18:57:59.013711   76334 main.go:141] libmachine: (kindnet-443473)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 18:57:59.013728   76334 main.go:141] libmachine: (kindnet-443473)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/kindnet-443473.rawdisk'/>
	I0815 18:57:59.013740   76334 main.go:141] libmachine: (kindnet-443473)       <target dev='hda' bus='virtio'/>
	I0815 18:57:59.013752   76334 main.go:141] libmachine: (kindnet-443473)     </disk>
	I0815 18:57:59.013761   76334 main.go:141] libmachine: (kindnet-443473)     <interface type='network'>
	I0815 18:57:59.013774   76334 main.go:141] libmachine: (kindnet-443473)       <source network='mk-kindnet-443473'/>
	I0815 18:57:59.013795   76334 main.go:141] libmachine: (kindnet-443473)       <model type='virtio'/>
	I0815 18:57:59.013808   76334 main.go:141] libmachine: (kindnet-443473)     </interface>
	I0815 18:57:59.013820   76334 main.go:141] libmachine: (kindnet-443473)     <interface type='network'>
	I0815 18:57:59.013832   76334 main.go:141] libmachine: (kindnet-443473)       <source network='default'/>
	I0815 18:57:59.013843   76334 main.go:141] libmachine: (kindnet-443473)       <model type='virtio'/>
	I0815 18:57:59.013885   76334 main.go:141] libmachine: (kindnet-443473)     </interface>
	I0815 18:57:59.013905   76334 main.go:141] libmachine: (kindnet-443473)     <serial type='pty'>
	I0815 18:57:59.013915   76334 main.go:141] libmachine: (kindnet-443473)       <target port='0'/>
	I0815 18:57:59.013922   76334 main.go:141] libmachine: (kindnet-443473)     </serial>
	I0815 18:57:59.013935   76334 main.go:141] libmachine: (kindnet-443473)     <console type='pty'>
	I0815 18:57:59.013960   76334 main.go:141] libmachine: (kindnet-443473)       <target type='serial' port='0'/>
	I0815 18:57:59.013972   76334 main.go:141] libmachine: (kindnet-443473)     </console>
	I0815 18:57:59.013982   76334 main.go:141] libmachine: (kindnet-443473)     <rng model='virtio'>
	I0815 18:57:59.013992   76334 main.go:141] libmachine: (kindnet-443473)       <backend model='random'>/dev/random</backend>
	I0815 18:57:59.014006   76334 main.go:141] libmachine: (kindnet-443473)     </rng>
	I0815 18:57:59.014017   76334 main.go:141] libmachine: (kindnet-443473)     
	I0815 18:57:59.014026   76334 main.go:141] libmachine: (kindnet-443473)     
	I0815 18:57:59.014034   76334 main.go:141] libmachine: (kindnet-443473)   </devices>
	I0815 18:57:59.014043   76334 main.go:141] libmachine: (kindnet-443473) </domain>
	I0815 18:57:59.014054   76334 main.go:141] libmachine: (kindnet-443473) 
	I0815 18:57:59.018433   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:61:8b:4a in network default
	I0815 18:57:59.019188   76334 main.go:141] libmachine: (kindnet-443473) Ensuring networks are active...
	I0815 18:57:59.019226   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:57:59.020093   76334 main.go:141] libmachine: (kindnet-443473) Ensuring network default is active
	I0815 18:57:59.020576   76334 main.go:141] libmachine: (kindnet-443473) Ensuring network mk-kindnet-443473 is active
	I0815 18:57:59.021207   76334 main.go:141] libmachine: (kindnet-443473) Getting domain xml...
	I0815 18:57:59.022115   76334 main.go:141] libmachine: (kindnet-443473) Creating domain...
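
The domain XML printed above, together with the network XML from 18:57:58.566486, is what gets handed to libvirt to define and boot the VM. Below is a minimal sketch of that hand-off assuming the libvirt.org/go/libvirt bindings; the module path, placeholder XML strings, and error handling are assumptions, not the libmachine driver code.

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Placeholders; substitute the network and domain XML dumped in the log.
        networkXML := "<network>...</network>"
        domainXML := "<domain type='kvm'>...</domain>"

        // Define and start the private network, then define and start the domain.
        netw, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        if err := netw.Create(); err != nil {
            log.Fatal(err)
        }

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatal(err)
        }
        if err := dom.Create(); err != nil { // equivalent to "virsh start"
            log.Fatal(err)
        }
    }
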
	I0815 18:58:00.292183   76334 main.go:141] libmachine: (kindnet-443473) Waiting to get IP...
	I0815 18:58:00.292926   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:00.293333   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:00.293355   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:00.293314   76672 retry.go:31] will retry after 207.060932ms: waiting for machine to come up
	I0815 18:58:00.501801   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:00.502361   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:00.502398   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:00.502311   76672 retry.go:31] will retry after 328.787936ms: waiting for machine to come up
	I0815 18:58:00.833041   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:00.833632   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:00.833671   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:00.833596   76672 retry.go:31] will retry after 354.243889ms: waiting for machine to come up
	I0815 18:58:00.055466   76153 main.go:141] libmachine: (auto-443473) Calling .GetIP
	I0815 18:58:00.058513   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:58:00.058991   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:58:00.059017   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:58:00.059245   76153 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:58:00.065829   76153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:58:00.086557   76153 kubeadm.go:883] updating cluster {Name:auto-443473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:58:00.086652   76153 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:58:00.086698   76153 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:58:00.130754   76153 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:58:00.130831   76153 ssh_runner.go:195] Run: which lz4
	I0815 18:58:00.135285   76153 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:58:00.139770   76153 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:58:00.139797   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:58:01.608306   76153 crio.go:462] duration metric: took 1.47304722s to copy over tarball
	I0815 18:58:01.608378   76153 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:58:03.863829   76153 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.255417013s)
	I0815 18:58:03.863862   76153 crio.go:469] duration metric: took 2.255528506s to extract the tarball
	I0815 18:58:03.863871   76153 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:58:03.900313   76153 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:58:03.944482   76153 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:58:03.944522   76153 cache_images.go:84] Images are preloaded, skipping loading
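
The preload handling from 18:58:00.086698 onward is: ask crictl for the image list, and when the expected images are missing, push the preloaded tarball and unpack it into /var with lz4 before deleting it. A condensed sketch of that sequence follows; the local ssh helper is a stand-in for ssh_runner, while the image name, tar flags, and tarball path are copied from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // ssh runs a command; in minikube this goes through ssh_runner, here it is
    // just a local stand-in so the sequence is easy to follow.
    func ssh(cmd string) (string, error) {
        out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
        return string(out), err
    }

    func main() {
        tarball := "/preloaded.tar.lz4" // pushed to the guest via scp in the real flow

        // 1. Ask CRI-O what images it already has.
        images, _ := ssh("sudo crictl images --output json")
        if strings.Contains(images, "registry.k8s.io/kube-apiserver:v1.31.0") {
            fmt.Println("all images are preloaded for cri-o runtime")
            return
        }

        // 2. Unpack the preload tarball directly into /var so the crio image
        //    store is populated.
        if _, err := ssh("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball); err != nil {
            fmt.Println("extract failed:", err)
            return
        }

        // 3. Remove the tarball once extracted.
        ssh("rm " + tarball)
    }
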
	I0815 18:58:03.944531   76153 kubeadm.go:934] updating node { 192.168.39.187 8443 v1.31.0 crio true true} ...
	I0815 18:58:03.944659   76153 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-443473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.187
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:auto-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
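
The kubelet drop-in shown above is rendered from the node's Kubernetes version, hostname and IP. A text/template sketch that produces the same content is below; the template string is reconstructed from the unit printed in the log, not copied from minikube's source.

    package main

    import (
        "os"
        "text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        data := struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.31.0", "auto-443473", "192.168.39.187"}

        tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
        // Writes the same 10-kubeadm.conf content that is scp'd to the guest.
        tmpl.Execute(os.Stdout, data)
    }
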
	I0815 18:58:03.944745   76153 ssh_runner.go:195] Run: crio config
	I0815 18:58:03.991292   76153 cni.go:84] Creating CNI manager for ""
	I0815 18:58:03.991310   76153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:58:03.991319   76153 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:58:03.991342   76153 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.187 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-443473 NodeName:auto-443473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.187"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.187 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:58:03.991486   76153 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.187
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-443473"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.187
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.187"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
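
The kubeadm.yaml above is rendered from the cluster parameters logged at kubeadm.go:181 (node name, advertise address, pod and service CIDRs). A minimal Go sketch of rendering such a config with text/template; the params struct and the template fragment below are illustrative only, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// params holds the handful of values substituted into the config;
// the field names here are hypothetical, not minikube's real struct.
type params struct {
	NodeName    string
	AdvertiseIP string
	BindPort    int
	PodSubnet   string
	ServiceCIDR string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseIP}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from the log above (auto-443473 / 192.168.39.187).
	p := params{"auto-443473", "192.168.39.187", 8443, "10.244.0.0/16", "10.96.0.0/12"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
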
	
	I0815 18:58:03.991552   76153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:58:04.001754   76153 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:58:04.001814   76153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:58:04.011484   76153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0815 18:58:04.028850   76153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:58:04.045782   76153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0815 18:58:04.062123   76153 ssh_runner.go:195] Run: grep 192.168.39.187	control-plane.minikube.internal$ /etc/hosts
	I0815 18:58:04.066049   76153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.187	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
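
The two commands above first check /etc/hosts for the control-plane entry and then rewrite the file with a remove-and-append one-liner. A rough local equivalent in Go, assuming direct file access rather than the sudo-over-SSH path minikube actually uses:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites hostsPath so it contains exactly one line
// mapping name to ip, mirroring the bash one-liner in the log above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for the name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.187", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
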
	I0815 18:58:04.077857   76153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:58:04.203629   76153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:58:04.221642   76153 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473 for IP: 192.168.39.187
	I0815 18:58:04.221667   76153 certs.go:194] generating shared ca certs ...
	I0815 18:58:04.221686   76153 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:04.221864   76153 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:58:04.221931   76153 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:58:04.221946   76153 certs.go:256] generating profile certs ...
	I0815 18:58:04.222015   76153 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/client.key
	I0815 18:58:04.222036   76153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/client.crt with IP's: []
	I0815 18:58:04.300677   76153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/client.crt ...
	I0815 18:58:04.300710   76153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/client.crt: {Name:mkb58658a5ea8eb298d1fbdfbc1fffb016ce25ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:04.300885   76153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/client.key ...
	I0815 18:58:04.300898   76153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/client.key: {Name:mkd88087abb8ec9f8f063c114f990a27bf6fde7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:04.301004   76153 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.key.94ee6dc6
	I0815 18:58:04.301025   76153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.crt.94ee6dc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.187]
	I0815 18:58:04.701135   76153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.crt.94ee6dc6 ...
	I0815 18:58:04.701171   76153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.crt.94ee6dc6: {Name:mk70232aeab7916fe2259dcad86d859638910e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:04.701363   76153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.key.94ee6dc6 ...
	I0815 18:58:04.701379   76153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.key.94ee6dc6: {Name:mkafb03ba7b9b28375ee103722dd150352b6564c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:04.701480   76153 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.crt.94ee6dc6 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.crt
	I0815 18:58:04.701694   76153 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.key.94ee6dc6 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.key
	I0815 18:58:04.701833   76153 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/proxy-client.key
	I0815 18:58:04.701856   76153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/proxy-client.crt with IP's: []
	I0815 18:58:04.778004   76153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/proxy-client.crt ...
	I0815 18:58:04.778035   76153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/proxy-client.crt: {Name:mk23a33abadb28b90c07b653443d0328cfb844e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:04.778220   76153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/proxy-client.key ...
	I0815 18:58:04.778235   76153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/proxy-client.key: {Name:mk836d474002afc1f23d3d46807aba2cfcc5b7cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
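
The certs steps above issue CA-signed profile certificates (client, apiserver, proxy-client) with the listed SANs. A condensed sketch of the same idea using crypto/x509; error handling is elided, and the subjects, key sizes and lifetimes are assumptions rather than minikube's exact values:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (stands in for the cached minikubeCA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf serving cert with the IP SANs seen in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.187"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
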
	I0815 18:58:04.778438   76153 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:58:04.778484   76153 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:58:04.778494   76153 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:58:04.778536   76153 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:58:04.778570   76153 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:58:04.778605   76153 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:58:04.778659   76153 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:58:04.779316   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:58:04.805492   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:58:04.831125   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:58:04.863144   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:58:04.893248   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0815 18:58:04.922047   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:58:04.951618   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:58:04.975884   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:58:05.001231   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:58:05.030459   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:58:01.189291   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:01.189902   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:01.189933   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:01.189868   76672 retry.go:31] will retry after 403.193034ms: waiting for machine to come up
	I0815 18:58:01.594398   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:01.594992   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:01.595019   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:01.594944   76672 retry.go:31] will retry after 650.390838ms: waiting for machine to come up
	I0815 18:58:02.246651   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:02.247202   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:02.247232   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:02.247132   76672 retry.go:31] will retry after 856.310197ms: waiting for machine to come up
	I0815 18:58:03.104526   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:03.105070   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:03.105102   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:03.105004   76672 retry.go:31] will retry after 1.072591302s: waiting for machine to come up
	I0815 18:58:04.179561   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:04.179996   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:04.180022   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:04.179957   76672 retry.go:31] will retry after 1.418877848s: waiting for machine to come up
	I0815 18:58:05.600189   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:05.600693   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:05.600716   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:05.600656   76672 retry.go:31] will retry after 1.393638663s: waiting for machine to come up
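
The kindnet-443473 lines above poll libvirt for the machine's DHCP lease, sleeping a growing, jittered interval between attempts. A generic sketch of that retry pattern; the lookup closure below is a stand-in, not the real libmachine query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP calls lookup() until it returns an address, sleeping a growing,
// jittered delay between attempts, much like the retry.go lines above.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay *= 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.50.168", nil
	}, 10)
	fmt.Println(ip, err)
}
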
	I0815 18:58:05.058913   76153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:58:05.082734   76153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:58:05.100246   76153 ssh_runner.go:195] Run: openssl version
	I0815 18:58:05.106023   76153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:58:05.117058   76153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:58:05.123002   76153 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:58:05.123070   76153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:58:05.129660   76153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:58:05.140833   76153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:58:05.153224   76153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:58:05.157652   76153 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:58:05.157710   76153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:58:05.163218   76153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:58:05.174133   76153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:58:05.184861   76153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:58:05.189110   76153 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:58:05.189154   76153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:58:05.194740   76153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
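
Each CA bundle copied above is made trusted by computing its OpenSSL subject hash and symlinking <hash>.0 to it under /etc/ssl/certs. A sketch of those two steps; in the real flow they run remotely with sudo:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mimics the openssl/ln steps in the log: compute the
// subject hash of a PEM bundle and create the <hash>.0 symlink that
// OpenSSL expects in the certs directory.
func linkCertByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
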
	I0815 18:58:05.205601   76153 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:58:05.209672   76153 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 18:58:05.209719   76153 kubeadm.go:392] StartCluster: {Name:auto-443473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:58:05.209793   76153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:58:05.209839   76153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:58:05.258714   76153 cri.go:89] found id: ""
	I0815 18:58:05.258821   76153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:58:05.269637   76153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:58:05.280471   76153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:58:05.293743   76153 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:58:05.293761   76153 kubeadm.go:157] found existing configuration files:
	
	I0815 18:58:05.293809   76153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:58:05.306347   76153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:58:05.306417   76153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:58:05.319378   76153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:58:05.331690   76153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:58:05.331744   76153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:58:05.344425   76153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:58:05.354593   76153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:58:05.354648   76153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:58:05.364215   76153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:58:05.374162   76153 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:58:05.374226   76153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
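
The grep/rm sequence above drops any kubeconfig that does not reference https://control-plane.minikube.internal:8443 so kubeadm can regenerate it. The same check-and-remove logic as a small Go sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Any config under /etc/kubernetes that is missing or points at a
// different endpoint is removed before kubeadm init, as in the log above.
func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f) // missing or stale: drop it
			fmt.Println("removed (or absent):", f)
		}
	}
}
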
	I0815 18:58:05.386077   76153 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:58:05.440044   76153 kubeadm.go:310] W0815 18:58:05.422412     851 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:58:05.441072   76153 kubeadm.go:310] W0815 18:58:05.423815     851 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:58:05.545004   76153 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:58:06.996273   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:06.996707   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:06.996736   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:06.996667   76672 retry.go:31] will retry after 1.481368962s: waiting for machine to come up
	I0815 18:58:08.480560   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:08.481141   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:08.481171   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:08.481089   76672 retry.go:31] will retry after 2.856036989s: waiting for machine to come up
	I0815 18:58:15.900858   76153 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 18:58:15.900945   76153 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:58:15.901036   76153 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:58:15.901165   76153 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:58:15.901264   76153 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 18:58:15.901324   76153 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:58:15.903096   76153 out.go:235]   - Generating certificates and keys ...
	I0815 18:58:15.903193   76153 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:58:15.903279   76153 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:58:15.903365   76153 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 18:58:15.903465   76153 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 18:58:15.903549   76153 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 18:58:15.903616   76153 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 18:58:15.903686   76153 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 18:58:15.903836   76153 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-443473 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0815 18:58:15.903912   76153 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 18:58:15.904066   76153 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-443473 localhost] and IPs [192.168.39.187 127.0.0.1 ::1]
	I0815 18:58:15.904162   76153 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 18:58:15.904255   76153 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 18:58:15.904315   76153 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 18:58:15.904417   76153 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:58:15.904521   76153 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:58:15.904607   76153 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 18:58:15.904687   76153 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:58:15.904776   76153 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:58:15.904861   76153 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:58:15.904972   76153 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:58:15.905064   76153 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:58:15.906576   76153 out.go:235]   - Booting up control plane ...
	I0815 18:58:15.906668   76153 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:58:15.906745   76153 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:58:15.906811   76153 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:58:15.906901   76153 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:58:15.906991   76153 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:58:15.907061   76153 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:58:15.907213   76153 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 18:58:15.907327   76153 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 18:58:15.907416   76153 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002605224s
	I0815 18:58:15.907530   76153 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 18:58:15.907617   76153 kubeadm.go:310] [api-check] The API server is healthy after 5.002231149s
	I0815 18:58:15.907741   76153 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 18:58:15.907875   76153 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 18:58:15.907931   76153 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 18:58:15.908132   76153 kubeadm.go:310] [mark-control-plane] Marking the node auto-443473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 18:58:15.908209   76153 kubeadm.go:310] [bootstrap-token] Using token: 3xcpha.ma3velq0ptoz0yc3
	I0815 18:58:15.909829   76153 out.go:235]   - Configuring RBAC rules ...
	I0815 18:58:15.909939   76153 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 18:58:15.910013   76153 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 18:58:15.910132   76153 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 18:58:15.910250   76153 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 18:58:15.910397   76153 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 18:58:15.910525   76153 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 18:58:15.910682   76153 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 18:58:15.910737   76153 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 18:58:15.910779   76153 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 18:58:15.910785   76153 kubeadm.go:310] 
	I0815 18:58:15.910833   76153 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 18:58:15.910839   76153 kubeadm.go:310] 
	I0815 18:58:15.910918   76153 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 18:58:15.910925   76153 kubeadm.go:310] 
	I0815 18:58:15.910945   76153 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 18:58:15.911002   76153 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 18:58:15.911045   76153 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 18:58:15.911050   76153 kubeadm.go:310] 
	I0815 18:58:15.911097   76153 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 18:58:15.911103   76153 kubeadm.go:310] 
	I0815 18:58:15.911147   76153 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 18:58:15.911153   76153 kubeadm.go:310] 
	I0815 18:58:15.911194   76153 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 18:58:15.911262   76153 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 18:58:15.911318   76153 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 18:58:15.911324   76153 kubeadm.go:310] 
	I0815 18:58:15.911398   76153 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 18:58:15.911470   76153 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 18:58:15.911491   76153 kubeadm.go:310] 
	I0815 18:58:15.911610   76153 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3xcpha.ma3velq0ptoz0yc3 \
	I0815 18:58:15.911721   76153 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 18:58:15.911742   76153 kubeadm.go:310] 	--control-plane 
	I0815 18:58:15.911748   76153 kubeadm.go:310] 
	I0815 18:58:15.911816   76153 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 18:58:15.911821   76153 kubeadm.go:310] 
	I0815 18:58:15.911892   76153 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3xcpha.ma3velq0ptoz0yc3 \
	I0815 18:58:15.911988   76153 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 18:58:15.912001   76153 cni.go:84] Creating CNI manager for ""
	I0815 18:58:15.912008   76153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:58:15.913768   76153 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:58:11.338747   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:11.339249   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:11.339274   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:11.339204   76672 retry.go:31] will retry after 2.461101752s: waiting for machine to come up
	I0815 18:58:13.801773   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:13.802280   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:13.802309   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:13.802230   76672 retry.go:31] will retry after 4.524265591s: waiting for machine to come up
	I0815 18:58:15.915193   76153 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:58:15.930134   76153 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
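
The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration for the 10.244.0.0/16 pod CIDR. An illustrative conflist of that shape, written from Go; the exact contents minikube generates may differ in fields and plugin ordering:

package main

import "os"

// Illustrative bridge CNI conflist matching the pod CIDR in the log.
// Writing to /etc/cni/net.d requires root.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
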
	I0815 18:58:15.948179   76153 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:58:15.948267   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:15.948286   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-443473 minikube.k8s.io/updated_at=2024_08_15T18_58_15_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=auto-443473 minikube.k8s.io/primary=true
	I0815 18:58:16.072859   76153 ops.go:34] apiserver oom_adj: -16
	I0815 18:58:16.072946   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:16.573820   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:17.073537   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:17.573723   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:18.073107   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:18.573537   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:19.073739   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:19.573050   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:20.073492   76153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:20.210513   76153 kubeadm.go:1113] duration metric: took 4.262315287s to wait for elevateKubeSystemPrivileges
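
The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: the RBAC binding is only useful once the default service account exists. A sketch of that polling loop, shelling out to kubectl (the real code invokes the versioned binary with sudo on the guest):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds
// or the timeout expires, mirroring the ~500ms polling in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %v", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println("default SA ready:", err == nil)
}
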
	I0815 18:58:20.210555   76153 kubeadm.go:394] duration metric: took 15.000838417s to StartCluster
	I0815 18:58:20.210652   76153 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:20.210755   76153 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:58:20.212347   76153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:20.212641   76153 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.187 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:58:20.212706   76153 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:58:20.212681   76153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 18:58:20.212800   76153 addons.go:69] Setting storage-provisioner=true in profile "auto-443473"
	I0815 18:58:20.212811   76153 addons.go:69] Setting default-storageclass=true in profile "auto-443473"
	I0815 18:58:20.212844   76153 addons.go:234] Setting addon storage-provisioner=true in "auto-443473"
	I0815 18:58:20.212876   76153 host.go:66] Checking if "auto-443473" exists ...
	I0815 18:58:20.212882   76153 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-443473"
	I0815 18:58:20.213232   76153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:20.213258   76153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:20.213290   76153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:20.213324   76153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:20.213529   76153 config.go:182] Loaded profile config "auto-443473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:58:20.214008   76153 out.go:177] * Verifying Kubernetes components...
	I0815 18:58:20.215285   76153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:58:20.229305   76153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43423
	I0815 18:58:20.229836   76153 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:20.230421   76153 main.go:141] libmachine: Using API Version  1
	I0815 18:58:20.230446   76153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:20.230451   76153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43447
	I0815 18:58:20.230786   76153 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:20.230837   76153 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:20.231397   76153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:20.231420   76153 main.go:141] libmachine: Using API Version  1
	I0815 18:58:20.231438   76153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:20.231502   76153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:20.231842   76153 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:20.232057   76153 main.go:141] libmachine: (auto-443473) Calling .GetState
	I0815 18:58:20.235616   76153 addons.go:234] Setting addon default-storageclass=true in "auto-443473"
	I0815 18:58:20.235657   76153 host.go:66] Checking if "auto-443473" exists ...
	I0815 18:58:20.236038   76153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:20.236074   76153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:20.247977   76153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46251
	I0815 18:58:20.248574   76153 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:20.249096   76153 main.go:141] libmachine: Using API Version  1
	I0815 18:58:20.249117   76153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:20.249421   76153 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:20.249603   76153 main.go:141] libmachine: (auto-443473) Calling .GetState
	I0815 18:58:20.251178   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:58:20.251352   76153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33005
	I0815 18:58:20.251728   76153 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:20.252186   76153 main.go:141] libmachine: Using API Version  1
	I0815 18:58:20.252199   76153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:20.252642   76153 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:20.252822   76153 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:58:18.329879   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:18.330331   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find current IP address of domain kindnet-443473 in network mk-kindnet-443473
	I0815 18:58:18.330353   76334 main.go:141] libmachine: (kindnet-443473) DBG | I0815 18:58:18.330264   76672 retry.go:31] will retry after 5.018488986s: waiting for machine to come up
	I0815 18:58:20.253167   76153 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:20.253199   76153 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:20.254002   76153 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:58:20.254022   76153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:58:20.254040   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:58:20.257581   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:58:20.257970   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:58:20.258011   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:58:20.258226   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:58:20.258385   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:58:20.258524   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:58:20.258666   76153 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa Username:docker}
	I0815 18:58:20.269085   76153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41059
	I0815 18:58:20.269557   76153 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:20.270023   76153 main.go:141] libmachine: Using API Version  1
	I0815 18:58:20.270044   76153 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:20.270375   76153 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:20.270553   76153 main.go:141] libmachine: (auto-443473) Calling .GetState
	I0815 18:58:20.272236   76153 main.go:141] libmachine: (auto-443473) Calling .DriverName
	I0815 18:58:20.272466   76153 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:58:20.272482   76153 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:58:20.272514   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHHostname
	I0815 18:58:20.275366   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:58:20.275803   76153 main.go:141] libmachine: (auto-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:88:11", ip: ""} in network mk-auto-443473: {Iface:virbr1 ExpiryTime:2024-08-15 19:57:50 +0000 UTC Type:0 Mac:52:54:00:c6:88:11 Iaid: IPaddr:192.168.39.187 Prefix:24 Hostname:auto-443473 Clientid:01:52:54:00:c6:88:11}
	I0815 18:58:20.275827   76153 main.go:141] libmachine: (auto-443473) DBG | domain auto-443473 has defined IP address 192.168.39.187 and MAC address 52:54:00:c6:88:11 in network mk-auto-443473
	I0815 18:58:20.275973   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHPort
	I0815 18:58:20.276165   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHKeyPath
	I0815 18:58:20.276302   76153 main.go:141] libmachine: (auto-443473) Calling .GetSSHUsername
	I0815 18:58:20.276591   76153 sshutil.go:53] new ssh client: &{IP:192.168.39.187 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/auto-443473/id_rsa Username:docker}
	I0815 18:58:20.371157   76153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
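
The pipeline above edits the coredns ConfigMap in place, inserting a hosts{} block for host.minikube.internal ahead of the forward directive. The equivalent string surgery in Go, operating on a sample Corefile:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts{} block immediately before the
// "forward . /etc/resolv.conf" line, which is what the sed pipeline in the
// log does to publish host.minikube.internal.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	// Sample Corefile fragment; the real one is read from the ConfigMap.
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
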
	I0815 18:58:20.427577   76153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:58:20.627721   76153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:58:20.670393   76153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:58:21.053056   76153 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0815 18:58:21.053977   76153 node_ready.go:35] waiting up to 15m0s for node "auto-443473" to be "Ready" ...
	I0815 18:58:21.072690   76153 node_ready.go:49] node "auto-443473" has status "Ready":"True"
	I0815 18:58:21.072723   76153 node_ready.go:38] duration metric: took 18.721493ms for node "auto-443473" to be "Ready" ...
	I0815 18:58:21.072749   76153 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:58:21.088812   76153 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace to be "Ready" ...
	I0815 18:58:21.557531   76153 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-443473" context rescaled to 1 replicas
	I0815 18:58:21.571051   76153 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:21.571077   76153 main.go:141] libmachine: (auto-443473) Calling .Close
	I0815 18:58:21.571051   76153 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:21.571148   76153 main.go:141] libmachine: (auto-443473) Calling .Close
	I0815 18:58:21.571453   76153 main.go:141] libmachine: (auto-443473) DBG | Closing plugin on server side
	I0815 18:58:21.571496   76153 main.go:141] libmachine: (auto-443473) DBG | Closing plugin on server side
	I0815 18:58:21.571500   76153 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:21.571520   76153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:21.571526   76153 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:21.571555   76153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:21.571579   76153 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:21.571589   76153 main.go:141] libmachine: (auto-443473) Calling .Close
	I0815 18:58:21.571531   76153 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:21.571658   76153 main.go:141] libmachine: (auto-443473) Calling .Close
	I0815 18:58:21.571820   76153 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:21.571832   76153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:21.572623   76153 main.go:141] libmachine: (auto-443473) DBG | Closing plugin on server side
	I0815 18:58:21.572651   76153 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:21.572668   76153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:21.589884   76153 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:21.589907   76153 main.go:141] libmachine: (auto-443473) Calling .Close
	I0815 18:58:21.590187   76153 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:21.590210   76153 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:21.590226   76153 main.go:141] libmachine: (auto-443473) DBG | Closing plugin on server side
	I0815 18:58:21.592277   76153 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0815 18:58:24.849302   76538 start.go:364] duration metric: took 46.267592205s to acquireMachinesLock for "calico-443473"
	I0815 18:58:24.849377   76538 start.go:93] Provisioning new machine with config: &{Name:calico-443473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:58:24.849517   76538 start.go:125] createHost starting for "" (driver="kvm2")
	I0815 18:58:21.594014   76153 addons.go:510] duration metric: took 1.381306997s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0815 18:58:23.095561   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:23.353642   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.354099   76334 main.go:141] libmachine: (kindnet-443473) Found IP for machine: 192.168.50.168
	I0815 18:58:23.354121   76334 main.go:141] libmachine: (kindnet-443473) Reserving static IP address...
	I0815 18:58:23.354135   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has current primary IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.354611   76334 main.go:141] libmachine: (kindnet-443473) DBG | unable to find host DHCP lease matching {name: "kindnet-443473", mac: "52:54:00:32:87:4d", ip: "192.168.50.168"} in network mk-kindnet-443473
	I0815 18:58:23.428512   76334 main.go:141] libmachine: (kindnet-443473) DBG | Getting to WaitForSSH function...
	I0815 18:58:23.428543   76334 main.go:141] libmachine: (kindnet-443473) Reserved static IP address: 192.168.50.168
	I0815 18:58:23.428565   76334 main.go:141] libmachine: (kindnet-443473) Waiting for SSH to be available...
	I0815 18:58:23.431367   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.431885   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:minikube Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:23.431930   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.432055   76334 main.go:141] libmachine: (kindnet-443473) DBG | Using SSH client type: external
	I0815 18:58:23.432097   76334 main.go:141] libmachine: (kindnet-443473) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa (-rw-------)
	I0815 18:58:23.432134   76334 main.go:141] libmachine: (kindnet-443473) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.168 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:58:23.432150   76334 main.go:141] libmachine: (kindnet-443473) DBG | About to run SSH command:
	I0815 18:58:23.432166   76334 main.go:141] libmachine: (kindnet-443473) DBG | exit 0
	I0815 18:58:23.556537   76334 main.go:141] libmachine: (kindnet-443473) DBG | SSH cmd err, output: <nil>: 
	I0815 18:58:23.556782   76334 main.go:141] libmachine: (kindnet-443473) KVM machine creation complete!
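The WaitForSSH step above shells out to the system ssh binary with host-key checking and password authentication disabled, running "exit 0" until the guest answers. A minimal Go sketch of that probe loop, assuming only the options visible in the log; the user, IP, key path and the 2-minute deadline below are placeholders, not values the driver hard-codes:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH repeatedly runs the external ssh client against the guest until
	// "exit 0" succeeds or the deadline passes. The options mirror the ones the
	// driver prints: no host-key checking, no password auth, short connect timeout.
	func waitForSSH(user, ip, keyPath string, deadline time.Duration) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0",
		}
		stop := time.Now().Add(deadline)
		for {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				return nil // guest accepted the connection and ran exit 0
			} else if time.Now().After(stop) {
				return fmt.Errorf("ssh not available on %s after %s: %w", ip, deadline, err)
			}
			time.Sleep(2 * time.Second)
		}
	}

	func main() {
		// Placeholder values; the real run uses the per-machine key under .minikube/machines.
		if err := waitForSSH("docker", "192.168.50.168", "/path/to/id_rsa", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}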
	I0815 18:58:23.557086   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetConfigRaw
	I0815 18:58:23.557594   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:23.557795   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:23.557914   76334 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 18:58:23.557924   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetState
	I0815 18:58:23.559242   76334 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 18:58:23.559253   76334 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 18:58:23.559258   76334 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 18:58:23.559264   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:23.561734   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.562065   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:23.562112   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.562247   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:23.562450   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:23.562604   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:23.562750   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:23.562921   76334 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:23.563097   76334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0815 18:58:23.563108   76334 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 18:58:23.675560   76334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:58:23.675581   76334 main.go:141] libmachine: Detecting the provisioner...
	I0815 18:58:23.675589   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:23.678392   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.678703   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:23.678733   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.678833   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:23.679011   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:23.679186   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:23.679313   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:23.679484   76334 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:23.679707   76334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0815 18:58:23.679721   76334 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 18:58:23.789115   76334 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 18:58:23.789194   76334 main.go:141] libmachine: found compatible host: buildroot
	I0815 18:58:23.789207   76334 main.go:141] libmachine: Provisioning with buildroot...
	I0815 18:58:23.789219   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetMachineName
	I0815 18:58:23.789480   76334 buildroot.go:166] provisioning hostname "kindnet-443473"
	I0815 18:58:23.789508   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetMachineName
	I0815 18:58:23.789655   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:23.792203   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.792546   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:23.792573   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.792693   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:23.792900   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:23.793081   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:23.793282   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:23.793461   76334 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:23.793625   76334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0815 18:58:23.793639   76334 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-443473 && echo "kindnet-443473" | sudo tee /etc/hostname
	I0815 18:58:23.916243   76334 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-443473
	
	I0815 18:58:23.916272   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:23.919297   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.919636   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:23.919666   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:23.919904   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:23.920107   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:23.920263   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:23.920405   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:23.920577   76334 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:23.920763   76334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0815 18:58:23.920786   76334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-443473' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-443473/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-443473' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:58:24.039936   76334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
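The remote shell snippet above keeps the hostname mapping idempotent: leave the file alone if the name is already present, rewrite an existing 127.0.1.1 line, otherwise append one. A small Go sketch of the same decision expressed as a pure function over the file contents; ensureHostname is a hypothetical helper for illustration, not minikube's actual code:

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostname returns hosts-file content that maps 127.0.1.1 to name,
	// following the same branches as the remote command: skip if already mapped,
	// rewrite an existing 127.0.1.1 line, otherwise append a new entry.
	func ensureHostname(hosts, name string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			f := strings.Fields(l)
			if len(f) > 1 && f[len(f)-1] == name {
				return hosts // already mapped, nothing to do
			}
		}
		for i, l := range lines {
			if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				return strings.Join(lines, "\n")
			}
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(ensureHostname("127.0.0.1 localhost\n", "kindnet-443473"))
	}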
	I0815 18:58:24.039963   76334 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:58:24.039985   76334 buildroot.go:174] setting up certificates
	I0815 18:58:24.039997   76334 provision.go:84] configureAuth start
	I0815 18:58:24.040005   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetMachineName
	I0815 18:58:24.040298   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetIP
	I0815 18:58:24.042987   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.043372   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.043403   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.043524   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:24.045777   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.046102   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.046126   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.046229   76334 provision.go:143] copyHostCerts
	I0815 18:58:24.046293   76334 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:58:24.046320   76334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:58:24.046403   76334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:58:24.046545   76334 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:58:24.046558   76334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:58:24.046592   76334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:58:24.046675   76334 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:58:24.046686   76334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:58:24.046712   76334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:58:24.046778   76334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.kindnet-443473 san=[127.0.0.1 192.168.50.168 kindnet-443473 localhost minikube]
	I0815 18:58:24.167041   76334 provision.go:177] copyRemoteCerts
	I0815 18:58:24.167116   76334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:58:24.167146   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:24.169947   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.170315   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.170354   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.170519   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:24.170711   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:24.170854   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:24.170975   76334 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa Username:docker}
	I0815 18:58:24.254768   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:58:24.279592   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0815 18:58:24.303819   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:58:24.327156   76334 provision.go:87] duration metric: took 287.149105ms to configureAuth
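configureAuth above copies the CA material from .minikube/certs and issues a server certificate whose subject alternative names cover 127.0.0.1, the machine IP, the hostname, localhost and minikube. A compact crypto/x509 sketch of issuing such a SAN-bearing certificate from a CA; the org string and addresses are taken from the log, while the throwaway in-memory CA, serial numbers, key size and validity periods are assumptions:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert signs a server certificate with the given CA, placing the
	// machine's addresses and names in the subject alternative names, as the
	// provisioner does before copying server.pem to /etc/docker on the guest.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, ips []net.IP, names []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
			DNSNames:     names,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}

	func main() {
		// Throwaway CA for the example; the real run loads ca.pem/ca-key.pem from .minikube/certs.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)

		der, _, err := newServerCert(ca, caKey, "jenkins.kindnet-443473",
			[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.168")},
			[]string{"kindnet-443473", "localhost", "minikube"})
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(der))
	}

In the run above the resulting certificate and key are written as server.pem and server-key.pem and then copied to /etc/docker on the guest, as the scp lines show.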
	I0815 18:58:24.327182   76334 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:58:24.327371   76334 config.go:182] Loaded profile config "kindnet-443473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:58:24.327447   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:24.330353   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.330683   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.330712   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.330885   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:24.331071   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:24.331320   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:24.331470   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:24.331647   76334 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:24.331859   76334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0815 18:58:24.331884   76334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:58:24.599023   76334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:58:24.599047   76334 main.go:141] libmachine: Checking connection to Docker...
	I0815 18:58:24.599055   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetURL
	I0815 18:58:24.600259   76334 main.go:141] libmachine: (kindnet-443473) DBG | Using libvirt version 6000000
	I0815 18:58:24.602852   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.603167   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.603197   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.603355   76334 main.go:141] libmachine: Docker is up and running!
	I0815 18:58:24.603373   76334 main.go:141] libmachine: Reticulating splines...
	I0815 18:58:24.603381   76334 client.go:171] duration metric: took 26.041641477s to LocalClient.Create
	I0815 18:58:24.603405   76334 start.go:167] duration metric: took 26.041702408s to libmachine.API.Create "kindnet-443473"
	I0815 18:58:24.603413   76334 start.go:293] postStartSetup for "kindnet-443473" (driver="kvm2")
	I0815 18:58:24.603424   76334 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:58:24.603441   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:24.603660   76334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:58:24.603682   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:24.605518   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.605814   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.605850   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.605959   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:24.606124   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:24.606287   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:24.606428   76334 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa Username:docker}
	I0815 18:58:24.695178   76334 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:58:24.699396   76334 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:58:24.699417   76334 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:58:24.699477   76334 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:58:24.699548   76334 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:58:24.699675   76334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:58:24.709077   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:58:24.732880   76334 start.go:296] duration metric: took 129.455082ms for postStartSetup
	I0815 18:58:24.732930   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetConfigRaw
	I0815 18:58:24.733491   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetIP
	I0815 18:58:24.736227   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.736596   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.736623   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.736841   76334 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/config.json ...
	I0815 18:58:24.737035   76334 start.go:128] duration metric: took 26.195542208s to createHost
	I0815 18:58:24.737055   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:24.739114   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.739421   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.739449   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.739572   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:24.739770   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:24.739911   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:24.740047   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:24.740224   76334 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:24.740396   76334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.168 22 <nil> <nil>}
	I0815 18:58:24.740406   76334 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:58:24.849147   76334 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723748304.828685583
	
	I0815 18:58:24.849173   76334 fix.go:216] guest clock: 1723748304.828685583
	I0815 18:58:24.849183   76334 fix.go:229] Guest: 2024-08-15 18:58:24.828685583 +0000 UTC Remote: 2024-08-15 18:58:24.737045749 +0000 UTC m=+48.770269264 (delta=91.639834ms)
	I0815 18:58:24.849214   76334 fix.go:200] guest clock delta is within tolerance: 91.639834ms
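The clock check above parses the guest's date +%s.%N output, compares it with the host clock and accepts the roughly 92ms delta. A minimal sketch of that comparison; only the delta arithmetic mirrors the log, and the 2-second tolerance used here is an assumption for illustration:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far the
	// guest clock is from the host clock (positive if the guest is ahead).
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", guestOutput, err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Values from the log: guest reported 1723748304.828685583, host read 18:58:24.737045749.
		host := time.Date(2024, 8, 15, 18, 58, 24, 737045749, time.UTC)
		d, err := clockDelta("1723748304.828685583", host)
		if err != nil {
			panic(err)
		}
		const tolerance = 2 * time.Second // assumed threshold, for the example only
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", d, d < tolerance && d > -tolerance)
	}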
	I0815 18:58:24.849219   76334 start.go:83] releasing machines lock for "kindnet-443473", held for 26.307901059s
	I0815 18:58:24.849242   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:24.849516   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetIP
	I0815 18:58:24.852225   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.852533   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.852557   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.852700   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:24.853361   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:24.853536   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:24.853639   76334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:58:24.853682   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:24.853812   76334 ssh_runner.go:195] Run: cat /version.json
	I0815 18:58:24.853836   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:24.856618   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.856822   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.856962   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.856988   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.857178   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:24.857296   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:24.857317   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:24.857362   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:24.857450   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:24.857539   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:24.857607   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:24.857679   76334 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa Username:docker}
	I0815 18:58:24.857758   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:24.857903   76334 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa Username:docker}
	I0815 18:58:24.963703   76334 ssh_runner.go:195] Run: systemctl --version
	I0815 18:58:24.972479   76334 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:58:25.138899   76334 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:58:25.146649   76334 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:58:25.146738   76334 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:58:25.163933   76334 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:58:25.163963   76334 start.go:495] detecting cgroup driver to use...
	I0815 18:58:25.164033   76334 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:58:25.183732   76334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:58:25.198002   76334 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:58:25.198088   76334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:58:25.213249   76334 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:58:25.226552   76334 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:58:25.343565   76334 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:58:25.508136   76334 docker.go:233] disabling docker service ...
	I0815 18:58:25.508198   76334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:58:25.523048   76334 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:58:25.535962   76334 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:58:25.651242   76334 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:58:25.778768   76334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:58:25.793867   76334 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:58:25.813110   76334 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:58:25.813176   76334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:58:25.824161   76334 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:58:25.824245   76334 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:58:25.837823   76334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:58:25.850775   76334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:58:25.863180   76334 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:58:25.874402   76334 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:58:25.885033   76334 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:58:25.905125   76334 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:58:25.917549   76334 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:58:25.927410   76334 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:58:25.927459   76334 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:58:25.941204   76334 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:58:25.952612   76334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:58:26.082017   76334 ssh_runner.go:195] Run: sudo systemctl restart crio
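The sed invocations above point CRI-O at the registry.k8s.io/pause:3.10 pause image, switch the cgroup manager to cgroupfs, pin conmon_cgroup to "pod" and open unprivileged low ports, after which crio is restarted. A Go sketch rendering an approximation of the resulting 02-crio.conf drop-in; the section layout is an assumption, only the values come from the commands above:

	package main

	import "fmt"

	// crioDropIn renders an approximation of the 02-crio.conf drop-in produced by
	// the sed edits in the log: pause image, cgroup manager, conmon cgroup and the
	// sysctl that allows binding unprivileged ports.
	func crioDropIn(pauseImage, cgroupManager string) string {
		return fmt.Sprintf(`[crio.image]
	pause_image = %q

	[crio.runtime]
	cgroup_manager = %q
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`, pauseImage, cgroupManager)
	}

	func main() {
		fmt.Print(crioDropIn("registry.k8s.io/pause:3.10", "cgroupfs"))
	}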
	I0815 18:58:26.256934   76334 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:58:26.256998   76334 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:58:26.262103   76334 start.go:563] Will wait 60s for crictl version
	I0815 18:58:26.262175   76334 ssh_runner.go:195] Run: which crictl
	I0815 18:58:26.266101   76334 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:58:26.312664   76334 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:58:26.312758   76334 ssh_runner.go:195] Run: crio --version
	I0815 18:58:26.346760   76334 ssh_runner.go:195] Run: crio --version
	I0815 18:58:26.383367   76334 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:58:24.852709   76538 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0815 18:58:24.852917   76538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:24.853130   76538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:24.873546   76538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0815 18:58:24.873927   76538 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:24.874470   76538 main.go:141] libmachine: Using API Version  1
	I0815 18:58:24.874495   76538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:24.874808   76538 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:24.875045   76538 main.go:141] libmachine: (calico-443473) Calling .GetMachineName
	I0815 18:58:24.875218   76538 main.go:141] libmachine: (calico-443473) Calling .DriverName
	I0815 18:58:24.875417   76538 start.go:159] libmachine.API.Create for "calico-443473" (driver="kvm2")
	I0815 18:58:24.875476   76538 client.go:168] LocalClient.Create starting
	I0815 18:58:24.875510   76538 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem
	I0815 18:58:24.875546   76538 main.go:141] libmachine: Decoding PEM data...
	I0815 18:58:24.875566   76538 main.go:141] libmachine: Parsing certificate...
	I0815 18:58:24.875631   76538 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem
	I0815 18:58:24.875663   76538 main.go:141] libmachine: Decoding PEM data...
	I0815 18:58:24.875679   76538 main.go:141] libmachine: Parsing certificate...
	I0815 18:58:24.875708   76538 main.go:141] libmachine: Running pre-create checks...
	I0815 18:58:24.875720   76538 main.go:141] libmachine: (calico-443473) Calling .PreCreateCheck
	I0815 18:58:24.876051   76538 main.go:141] libmachine: (calico-443473) Calling .GetConfigRaw
	I0815 18:58:24.876447   76538 main.go:141] libmachine: Creating machine...
	I0815 18:58:24.876459   76538 main.go:141] libmachine: (calico-443473) Calling .Create
	I0815 18:58:24.876592   76538 main.go:141] libmachine: (calico-443473) Creating KVM machine...
	I0815 18:58:24.877909   76538 main.go:141] libmachine: (calico-443473) DBG | found existing default KVM network
	I0815 18:58:24.879558   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:24.879387   76933 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ab:d6:46} reservation:<nil>}
	I0815 18:58:24.880595   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:24.880461   76933 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:90:97:ae} reservation:<nil>}
	I0815 18:58:24.881351   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:24.881267   76933 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:4e:08:d1} reservation:<nil>}
	I0815 18:58:24.882539   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:24.882438   76933 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289ad0}
	I0815 18:58:24.882569   76538 main.go:141] libmachine: (calico-443473) DBG | created network xml: 
	I0815 18:58:24.882586   76538 main.go:141] libmachine: (calico-443473) DBG | <network>
	I0815 18:58:24.882604   76538 main.go:141] libmachine: (calico-443473) DBG |   <name>mk-calico-443473</name>
	I0815 18:58:24.882618   76538 main.go:141] libmachine: (calico-443473) DBG |   <dns enable='no'/>
	I0815 18:58:24.882625   76538 main.go:141] libmachine: (calico-443473) DBG |   
	I0815 18:58:24.882639   76538 main.go:141] libmachine: (calico-443473) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0815 18:58:24.882652   76538 main.go:141] libmachine: (calico-443473) DBG |     <dhcp>
	I0815 18:58:24.882662   76538 main.go:141] libmachine: (calico-443473) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0815 18:58:24.882673   76538 main.go:141] libmachine: (calico-443473) DBG |     </dhcp>
	I0815 18:58:24.882695   76538 main.go:141] libmachine: (calico-443473) DBG |   </ip>
	I0815 18:58:24.882717   76538 main.go:141] libmachine: (calico-443473) DBG |   
	I0815 18:58:24.882727   76538 main.go:141] libmachine: (calico-443473) DBG | </network>
	I0815 18:58:24.882738   76538 main.go:141] libmachine: (calico-443473) DBG | 
	I0815 18:58:24.888212   76538 main.go:141] libmachine: (calico-443473) DBG | trying to create private KVM network mk-calico-443473 192.168.72.0/24...
	I0815 18:58:24.955216   76538 main.go:141] libmachine: (calico-443473) DBG | private KVM network mk-calico-443473 192.168.72.0/24 created
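Above, the driver skips 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 because existing libvirt networks already occupy them, and settles on 192.168.72.0/24 for mk-calico-443473. A sketch of that first-free-subnet scan; the candidate list and the taken map below are assumptions standing in for the real network probing:

	package main

	import "fmt"

	// firstFreeSubnet returns the first /24 from candidates that no existing
	// network occupies, mimicking the "skipping subnet ... that is taken" and
	// "using free private subnet" decisions in the log.
	func firstFreeSubnet(candidates []string, taken map[string]bool) (string, error) {
		for _, cidr := range candidates {
			if taken[cidr] {
				fmt.Printf("skipping subnet %s that is taken\n", cidr)
				continue
			}
			return cidr, nil
		}
		return "", fmt.Errorf("no free subnet among %d candidates", len(candidates))
	}

	func main() {
		candidates := []string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"}
		taken := map[string]bool{ // networks already defined on the host in this run
			"192.168.39.0/24": true,
			"192.168.50.0/24": true,
			"192.168.61.0/24": true,
		}
		free, err := firstFreeSubnet(candidates, taken)
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet", free)
	}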
	I0815 18:58:24.955248   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:24.955174   76933 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:58:24.955262   76538 main.go:141] libmachine: (calico-443473) Setting up store path in /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473 ...
	I0815 18:58:24.955279   76538 main.go:141] libmachine: (calico-443473) Building disk image from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 18:58:24.955353   76538 main.go:141] libmachine: (calico-443473) Downloading /home/jenkins/minikube-integration/19450-13013/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso...
	I0815 18:58:25.199200   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:25.199101   76933 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473/id_rsa...
	I0815 18:58:25.306857   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:25.306724   76933 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473/calico-443473.rawdisk...
	I0815 18:58:25.306886   76538 main.go:141] libmachine: (calico-443473) DBG | Writing magic tar header
	I0815 18:58:25.306900   76538 main.go:141] libmachine: (calico-443473) DBG | Writing SSH key tar header
	I0815 18:58:25.306913   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:25.306858   76933 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473 ...
	I0815 18:58:25.306987   76538 main.go:141] libmachine: (calico-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473
	I0815 18:58:25.307027   76538 main.go:141] libmachine: (calico-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473 (perms=drwx------)
	I0815 18:58:25.307048   76538 main.go:141] libmachine: (calico-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube/machines (perms=drwxr-xr-x)
	I0815 18:58:25.307060   76538 main.go:141] libmachine: (calico-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube/machines
	I0815 18:58:25.307075   76538 main.go:141] libmachine: (calico-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:58:25.307086   76538 main.go:141] libmachine: (calico-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19450-13013
	I0815 18:58:25.307103   76538 main.go:141] libmachine: (calico-443473) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0815 18:58:25.307120   76538 main.go:141] libmachine: (calico-443473) DBG | Checking permissions on dir: /home/jenkins
	I0815 18:58:25.307133   76538 main.go:141] libmachine: (calico-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013/.minikube (perms=drwxr-xr-x)
	I0815 18:58:25.307150   76538 main.go:141] libmachine: (calico-443473) Setting executable bit set on /home/jenkins/minikube-integration/19450-13013 (perms=drwxrwxr-x)
	I0815 18:58:25.307162   76538 main.go:141] libmachine: (calico-443473) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0815 18:58:25.307186   76538 main.go:141] libmachine: (calico-443473) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0815 18:58:25.307202   76538 main.go:141] libmachine: (calico-443473) DBG | Checking permissions on dir: /home
	I0815 18:58:25.307212   76538 main.go:141] libmachine: (calico-443473) Creating domain...
	I0815 18:58:25.307224   76538 main.go:141] libmachine: (calico-443473) DBG | Skipping /home - not owner
	I0815 18:58:25.308408   76538 main.go:141] libmachine: (calico-443473) define libvirt domain using xml: 
	I0815 18:58:25.308437   76538 main.go:141] libmachine: (calico-443473) <domain type='kvm'>
	I0815 18:58:25.308448   76538 main.go:141] libmachine: (calico-443473)   <name>calico-443473</name>
	I0815 18:58:25.308456   76538 main.go:141] libmachine: (calico-443473)   <memory unit='MiB'>3072</memory>
	I0815 18:58:25.308463   76538 main.go:141] libmachine: (calico-443473)   <vcpu>2</vcpu>
	I0815 18:58:25.308467   76538 main.go:141] libmachine: (calico-443473)   <features>
	I0815 18:58:25.308476   76538 main.go:141] libmachine: (calico-443473)     <acpi/>
	I0815 18:58:25.308483   76538 main.go:141] libmachine: (calico-443473)     <apic/>
	I0815 18:58:25.308509   76538 main.go:141] libmachine: (calico-443473)     <pae/>
	I0815 18:58:25.308520   76538 main.go:141] libmachine: (calico-443473)     
	I0815 18:58:25.308533   76538 main.go:141] libmachine: (calico-443473)   </features>
	I0815 18:58:25.308541   76538 main.go:141] libmachine: (calico-443473)   <cpu mode='host-passthrough'>
	I0815 18:58:25.308552   76538 main.go:141] libmachine: (calico-443473)   
	I0815 18:58:25.308561   76538 main.go:141] libmachine: (calico-443473)   </cpu>
	I0815 18:58:25.308569   76538 main.go:141] libmachine: (calico-443473)   <os>
	I0815 18:58:25.308577   76538 main.go:141] libmachine: (calico-443473)     <type>hvm</type>
	I0815 18:58:25.308582   76538 main.go:141] libmachine: (calico-443473)     <boot dev='cdrom'/>
	I0815 18:58:25.308594   76538 main.go:141] libmachine: (calico-443473)     <boot dev='hd'/>
	I0815 18:58:25.308624   76538 main.go:141] libmachine: (calico-443473)     <bootmenu enable='no'/>
	I0815 18:58:25.308645   76538 main.go:141] libmachine: (calico-443473)   </os>
	I0815 18:58:25.308659   76538 main.go:141] libmachine: (calico-443473)   <devices>
	I0815 18:58:25.308671   76538 main.go:141] libmachine: (calico-443473)     <disk type='file' device='cdrom'>
	I0815 18:58:25.308689   76538 main.go:141] libmachine: (calico-443473)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473/boot2docker.iso'/>
	I0815 18:58:25.308701   76538 main.go:141] libmachine: (calico-443473)       <target dev='hdc' bus='scsi'/>
	I0815 18:58:25.308713   76538 main.go:141] libmachine: (calico-443473)       <readonly/>
	I0815 18:58:25.308726   76538 main.go:141] libmachine: (calico-443473)     </disk>
	I0815 18:58:25.308740   76538 main.go:141] libmachine: (calico-443473)     <disk type='file' device='disk'>
	I0815 18:58:25.308753   76538 main.go:141] libmachine: (calico-443473)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0815 18:58:25.308771   76538 main.go:141] libmachine: (calico-443473)       <source file='/home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473/calico-443473.rawdisk'/>
	I0815 18:58:25.308783   76538 main.go:141] libmachine: (calico-443473)       <target dev='hda' bus='virtio'/>
	I0815 18:58:25.308813   76538 main.go:141] libmachine: (calico-443473)     </disk>
	I0815 18:58:25.308836   76538 main.go:141] libmachine: (calico-443473)     <interface type='network'>
	I0815 18:58:25.308851   76538 main.go:141] libmachine: (calico-443473)       <source network='mk-calico-443473'/>
	I0815 18:58:25.308862   76538 main.go:141] libmachine: (calico-443473)       <model type='virtio'/>
	I0815 18:58:25.308874   76538 main.go:141] libmachine: (calico-443473)     </interface>
	I0815 18:58:25.308886   76538 main.go:141] libmachine: (calico-443473)     <interface type='network'>
	I0815 18:58:25.308898   76538 main.go:141] libmachine: (calico-443473)       <source network='default'/>
	I0815 18:58:25.308909   76538 main.go:141] libmachine: (calico-443473)       <model type='virtio'/>
	I0815 18:58:25.308922   76538 main.go:141] libmachine: (calico-443473)     </interface>
	I0815 18:58:25.308932   76538 main.go:141] libmachine: (calico-443473)     <serial type='pty'>
	I0815 18:58:25.308941   76538 main.go:141] libmachine: (calico-443473)       <target port='0'/>
	I0815 18:58:25.308952   76538 main.go:141] libmachine: (calico-443473)     </serial>
	I0815 18:58:25.308978   76538 main.go:141] libmachine: (calico-443473)     <console type='pty'>
	I0815 18:58:25.308995   76538 main.go:141] libmachine: (calico-443473)       <target type='serial' port='0'/>
	I0815 18:58:25.309006   76538 main.go:141] libmachine: (calico-443473)     </console>
	I0815 18:58:25.309016   76538 main.go:141] libmachine: (calico-443473)     <rng model='virtio'>
	I0815 18:58:25.309029   76538 main.go:141] libmachine: (calico-443473)       <backend model='random'>/dev/random</backend>
	I0815 18:58:25.309039   76538 main.go:141] libmachine: (calico-443473)     </rng>
	I0815 18:58:25.309049   76538 main.go:141] libmachine: (calico-443473)     
	I0815 18:58:25.309058   76538 main.go:141] libmachine: (calico-443473)     
	I0815 18:58:25.309069   76538 main.go:141] libmachine: (calico-443473)   </devices>
	I0815 18:58:25.309079   76538 main.go:141] libmachine: (calico-443473) </domain>
	I0815 18:58:25.309092   76538 main.go:141] libmachine: (calico-443473) 
	I0815 18:58:25.315940   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:fa:3c:8f in network default
	I0815 18:58:25.316706   76538 main.go:141] libmachine: (calico-443473) Ensuring networks are active...
	I0815 18:58:25.316723   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:25.317362   76538 main.go:141] libmachine: (calico-443473) Ensuring network default is active
	I0815 18:58:25.317638   76538 main.go:141] libmachine: (calico-443473) Ensuring network mk-calico-443473 is active
	I0815 18:58:25.318108   76538 main.go:141] libmachine: (calico-443473) Getting domain xml...
	I0815 18:58:25.318805   76538 main.go:141] libmachine: (calico-443473) Creating domain...
	I0815 18:58:26.649210   76538 main.go:141] libmachine: (calico-443473) Waiting to get IP...
	I0815 18:58:26.650120   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:26.650574   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:26.650599   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:26.650552   76933 retry.go:31] will retry after 242.777914ms: waiting for machine to come up
	I0815 18:58:26.895026   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:26.895674   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:26.895706   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:26.895623   76933 retry.go:31] will retry after 338.574417ms: waiting for machine to come up
	I0815 18:58:27.236356   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:27.237363   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:27.237401   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:27.237315   76933 retry.go:31] will retry after 333.060193ms: waiting for machine to come up
	I0815 18:58:27.571738   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:27.572349   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:27.572373   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:27.572300   76933 retry.go:31] will retry after 446.013244ms: waiting for machine to come up
	I0815 18:58:28.019725   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:28.020232   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:28.020259   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:28.020200   76933 retry.go:31] will retry after 650.490531ms: waiting for machine to come up
	I0815 18:58:25.098659   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:27.598097   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:26.384844   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetIP
	I0815 18:58:26.388299   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:26.388811   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:26.388838   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:26.389078   76334 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:58:26.393687   76334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
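Annotation: the bash one-liner above rewrites the guest's /etc/hosts by filtering out any stale host.minikube.internal line and appending the gateway mapping. A minimal Go sketch of the same filter-and-append idea; the helper is hypothetical and writes to a scratch path, with the IP and hostname taken from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites an /etc/hosts-style file so that exactly one line
// maps hostname to ip, mirroring the grep -v / echo / cp pipeline in the log
// above. Hypothetical helper, not minikube's actual implementation.
func upsertHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop any stale mapping for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values taken from the log; the path is a scratch copy, not /etc/hosts.
	if err := upsertHostsEntry("/tmp/hosts-example", "192.168.50.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}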
	I0815 18:58:26.407274   76334 kubeadm.go:883] updating cluster {Name:kindnet-443473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:58:26.407380   76334 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:58:26.407437   76334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:58:26.450984   76334 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:58:26.451044   76334 ssh_runner.go:195] Run: which lz4
	I0815 18:58:26.456145   76334 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:58:26.460704   76334 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:58:26.460726   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:58:27.888566   76334 crio.go:462] duration metric: took 1.432455118s to copy over tarball
	I0815 18:58:27.888652   76334 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:58:30.235781   76334 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.347099618s)
	I0815 18:58:30.235807   76334 crio.go:469] duration metric: took 2.347208741s to extract the tarball
	I0815 18:58:30.235816   76334 ssh_runner.go:146] rm: /preloaded.tar.lz4
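Annotation: the three steps above (scp the preloaded tarball, untar it into /var with lz4, then delete it) are how the cri-o image preload lands on the node. A hedged Go sketch of the extract step, shelling out the same way the ssh_runner lines do; it assumes tar and lz4 are installed and that the caller may run sudo.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks a preloaded image tarball into targetDir with the
// same tar invocation shown in the log. Sketch only.
func extractPreload(tarball, targetDir string) error {
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", targetDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v: %s", tarball, err, out)
	}
	fmt.Printf("took %v to extract the tarball\n", time.Since(start))
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}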
	I0815 18:58:30.273098   76334 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:58:30.313426   76334 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:58:30.313457   76334 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:58:30.313467   76334 kubeadm.go:934] updating node { 192.168.50.168 8443 v1.31.0 crio true true} ...
	I0815 18:58:30.313597   76334 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-443473 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.168
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:kindnet-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
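Annotation: the [Unit]/[Service] fragment above is the systemd drop-in minikube writes for the kubelet; the empty ExecStart= line clears the ExecStart inherited from the base unit before the override sets the node-specific flags (node IP, hostname override, bootstrap kubeconfig). A small Go sketch that renders such a drop-in with text/template; the struct and template are illustrative, with the values copied from the log.

package main

import (
	"os"
	"text/template"
)

// Illustrative template for a kubelet systemd drop-in like the one in the log.
const dropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type kubeletOpts struct {
	Runtime     string
	KubeletPath string
	NodeName    string
	NodeIP      string
}

func main() {
	t := template.Must(template.New("kubelet-dropin").Parse(dropIn))
	opts := kubeletOpts{
		Runtime:     "crio",
		KubeletPath: "/var/lib/minikube/binaries/v1.31.0/kubelet",
		NodeName:    "kindnet-443473",
		NodeIP:      "192.168.50.168",
	}
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}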
	I0815 18:58:30.313683   76334 ssh_runner.go:195] Run: crio config
	I0815 18:58:30.358904   76334 cni.go:84] Creating CNI manager for "kindnet"
	I0815 18:58:30.358927   76334 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:58:30.358955   76334 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.168 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-443473 NodeName:kindnet-443473 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.168"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.168 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:58:30.359108   76334 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.168
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-443473"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.168
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.168"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
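Annotation: the block above is the full multi-document kubeadm config that minikube scp's to /var/tmp/minikube/kubeadm.yaml.new: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration (with image GC and hard evictions effectively disabled), and a KubeProxyConfiguration. A rough Go sketch that splits such a file on its document separators and lists each document's kind, using only the standard library; a real tool would use a YAML parser instead.

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds splits a multi-document YAML file on "---" separators and
// returns the value of each document's top-level "kind:" line.
func listKinds(config string) []string {
	var kinds []string
	for _, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:")))
				break
			}
		}
	}
	return kinds
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Expected for the config above:
	// [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(listKinds(string(data)))
}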
	I0815 18:58:30.359183   76334 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:58:30.369010   76334 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:58:30.369067   76334 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:58:30.378543   76334 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0815 18:58:30.395513   76334 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:58:30.413005   76334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0815 18:58:30.430411   76334 ssh_runner.go:195] Run: grep 192.168.50.168	control-plane.minikube.internal$ /etc/hosts
	I0815 18:58:30.434549   76334 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.168	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:58:30.447379   76334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:58:30.575136   76334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:58:30.594561   76334 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473 for IP: 192.168.50.168
	I0815 18:58:30.594585   76334 certs.go:194] generating shared ca certs ...
	I0815 18:58:30.594604   76334 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:30.594778   76334 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:58:30.594832   76334 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:58:30.594845   76334 certs.go:256] generating profile certs ...
	I0815 18:58:30.594912   76334 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/client.key
	I0815 18:58:30.594929   76334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/client.crt with IP's: []
	I0815 18:58:30.703890   76334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/client.crt ...
	I0815 18:58:30.703929   76334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/client.crt: {Name:mke5080da981ffdacd853494f98244f53a20187b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:30.704117   76334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/client.key ...
	I0815 18:58:30.704132   76334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/client.key: {Name:mk48934f04ce537b9ecfc613aa4289b2ec4fbf6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:30.704213   76334 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.key.cefe5884
	I0815 18:58:30.704228   76334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.crt.cefe5884 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.168]
	I0815 18:58:30.787490   76334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.crt.cefe5884 ...
	I0815 18:58:30.787518   76334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.crt.cefe5884: {Name:mkd85c75cef7822cfcdc01d2a350b849c4dc3867 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:30.787669   76334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.key.cefe5884 ...
	I0815 18:58:30.787682   76334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.key.cefe5884: {Name:mk5ec78e6381481beff1fb33e87600174144c269 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:30.787754   76334 certs.go:381] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.crt.cefe5884 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.crt
	I0815 18:58:30.787836   76334 certs.go:385] copying /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.key.cefe5884 -> /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.key
	I0815 18:58:30.787893   76334 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/proxy-client.key
	I0815 18:58:30.787909   76334 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/proxy-client.crt with IP's: []
	I0815 18:58:30.858539   76334 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/proxy-client.crt ...
	I0815 18:58:30.858568   76334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/proxy-client.crt: {Name:mk4d37da813a8a01ec9df0ba81005246ea5936d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:30.858722   76334 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/proxy-client.key ...
	I0815 18:58:30.858734   76334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/proxy-client.key: {Name:mkb29fd82ccb60a7c4c67721785e0289017d6e3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
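Annotation: the certs.go sequence above reuses the shared minikubeCA and proxyClientCA key pairs and then generates the per-profile certificates: an admin client cert, an apiserver serving cert whose IP SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP, and the aggregator proxy-client cert. A hedged crypto/x509 sketch of the middle step, signing a serving cert with those IP SANs from an in-memory CA; error handling is elided, and minikube actually persists and reuses its CA under .minikube rather than regenerating it.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch (the real one is the persisted minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SAN set that appears in the log above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.50.168"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}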
	I0815 18:58:30.858898   76334 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:58:30.858933   76334 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:58:30.858944   76334 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:58:30.858967   76334 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:58:30.858990   76334 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:58:30.859011   76334 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:58:30.859047   76334 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:58:30.859585   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:58:30.885899   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:58:30.909754   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:58:30.933699   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:58:30.956779   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 18:58:30.980432   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:58:31.005742   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:58:31.033468   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/kindnet-443473/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:58:31.058594   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:58:31.085210   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:58:31.109153   76334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:58:31.135292   76334 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:58:31.151420   76334 ssh_runner.go:195] Run: openssl version
	I0815 18:58:31.157082   76334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:58:31.167151   76334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:58:31.171546   76334 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:58:31.171599   76334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:58:31.177520   76334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:58:31.188129   76334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:58:31.198823   76334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:58:31.203345   76334 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:58:31.203399   76334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:58:31.209408   76334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:58:31.220556   76334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:58:31.231360   76334 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:58:31.235728   76334 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:58:31.235792   76334 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:58:31.241453   76334 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
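Annotation: each openssl x509 -hash plus ln -fs pair above registers a certificate in the system trust directory: OpenSSL looks CA certificates up by their subject-name hash, so every PEM under /etc/ssl/certs needs a matching <hash>.0 symlink. A small Go sketch of one hash-and-link step that shells out to openssl; it assumes openssl is on PATH and, like the ln -fs in the log, ignores the possibility of hash collisions.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the /etc/ssl/certs style "<hash>.0" symlink for
// certPath inside certsDir, mirroring the openssl/ln pair in the log above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace an existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs-demo"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}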
	I0815 18:58:31.252272   76334 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:58:31.257708   76334 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 18:58:31.257766   76334 kubeadm.go:392] StartCluster: {Name:kindnet-443473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:58:31.257843   76334 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:58:31.257917   76334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:58:31.310047   76334 cri.go:89] found id: ""
	I0815 18:58:31.310152   76334 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:58:31.321929   76334 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:58:31.335025   76334 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:58:31.352701   76334 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:58:31.352718   76334 kubeadm.go:157] found existing configuration files:
	
	I0815 18:58:31.352757   76334 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:58:31.362780   76334 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:58:31.362841   76334 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:58:31.373264   76334 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:58:31.383210   76334 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:58:31.383277   76334 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:58:31.393255   76334 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:58:31.404203   76334 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:58:31.404255   76334 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:58:31.414323   76334 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:58:31.423973   76334 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:58:31.424033   76334 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:58:31.433426   76334 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:58:31.494767   76334 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 18:58:31.494842   76334 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:58:31.615034   76334 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:58:31.615208   76334 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:58:31.615358   76334 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 18:58:31.637722   76334 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:58:28.672009   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:28.672477   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:28.672529   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:28.672467   76933 retry.go:31] will retry after 796.181434ms: waiting for machine to come up
	I0815 18:58:29.470379   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:29.470893   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:29.470922   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:29.470844   76933 retry.go:31] will retry after 786.862434ms: waiting for machine to come up
	I0815 18:58:30.259169   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:30.259637   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:30.259666   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:30.259598   76933 retry.go:31] will retry after 1.296242604s: waiting for machine to come up
	I0815 18:58:31.557504   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:31.558029   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:31.558052   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:31.557964   76933 retry.go:31] will retry after 1.385617901s: waiting for machine to come up
	I0815 18:58:32.945652   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:32.946212   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:32.946240   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:32.946178   76933 retry.go:31] will retry after 1.906255022s: waiting for machine to come up
	I0815 18:58:31.867463   76334 out.go:235]   - Generating certificates and keys ...
	I0815 18:58:31.867587   76334 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:58:31.867671   76334 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:58:31.867801   76334 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 18:58:31.867898   76334 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 18:58:32.124632   76334 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 18:58:32.305183   76334 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 18:58:32.436341   76334 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 18:58:32.436663   76334 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-443473 localhost] and IPs [192.168.50.168 127.0.0.1 ::1]
	I0815 18:58:32.584433   76334 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 18:58:32.584683   76334 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-443473 localhost] and IPs [192.168.50.168 127.0.0.1 ::1]
	I0815 18:58:32.944953   76334 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 18:58:33.151863   76334 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 18:58:33.333001   76334 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 18:58:33.333240   76334 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:58:33.538447   76334 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:58:33.579438   76334 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 18:58:33.669202   76334 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:58:34.103687   76334 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:58:34.331390   76334 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:58:34.332471   76334 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:58:34.337110   76334 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:58:30.096149   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:32.970472   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:34.095727   76153 pod_ready.go:98] pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:33 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:20 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:20 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:20 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:20 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.187 HostIPs:[{IP:192.168.39.187}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-15 18:58:20 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 18:58:21 +0000 UTC,FinishedAt:2024-08-15 18:58:32 +0000 UTC,ContainerID:cri-o://5fa3193d59df42812cebe8eee84fc70e257c37872707d691bf502c50e8d6ec8c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://5fa3193d59df42812cebe8eee84fc70e257c37872707d691bf502c50e8d6ec8c Started:0xc00188d440 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00067be10} {Name:kube-api-access-pjwq5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00067be20}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 18:58:34.095758   76153 pod_ready.go:82] duration metric: took 13.006920067s for pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace to be "Ready" ...
	E0815 18:58:34.095769   76153 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-6f6b679f8f-bq7p8" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:33 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:20 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:20 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:20 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-08-15 18:58:20 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.187 HostIPs:[{IP:192.168.39.187}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-08-15 18:58:20 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-08-15 18:58:21 +0000 UTC,FinishedAt:2024-08-15 18:58:32 +0000 UTC,ContainerID:cri-o://5fa3193d59df42812cebe8eee84fc70e257c37872707d691bf502c50e8d6ec8c,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4 ContainerID:cri-o://5fa3193d59df42812cebe8eee84fc70e257c37872707d691bf502c50e8d6ec8c Started:0xc00188d440 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00067be10} {Name:kube-api-access-pjwq5 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00067be20}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0815 18:58:34.095777   76153 pod_ready.go:79] waiting up to 15m0s for pod "coredns-6f6b679f8f-fsrw6" in "kube-system" namespace to be "Ready" ...
	I0815 18:58:34.339017   76334 out.go:235]   - Booting up control plane ...
	I0815 18:58:34.339148   76334 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:58:34.339251   76334 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:58:34.339347   76334 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:58:34.360183   76334 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:58:34.369836   76334 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:58:34.369898   76334 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:58:34.535878   76334 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 18:58:34.536035   76334 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 18:58:35.536734   76334 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001330587s
	I0815 18:58:35.536879   76334 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 18:58:34.854546   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:34.855073   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:34.855117   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:34.855044   76933 retry.go:31] will retry after 2.605974094s: waiting for machine to come up
	I0815 18:58:37.462888   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:37.463387   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:37.463419   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:37.463329   76933 retry.go:31] will retry after 3.475808838s: waiting for machine to come up
	I0815 18:58:36.104083   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-fsrw6" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:38.602512   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-fsrw6" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:40.536787   76334 kubeadm.go:310] [api-check] The API server is healthy after 5.000984641s
	I0815 18:58:40.556121   76334 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 18:58:40.580152   76334 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 18:58:40.613540   76334 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 18:58:40.614139   76334 kubeadm.go:310] [mark-control-plane] Marking the node kindnet-443473 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 18:58:40.630698   76334 kubeadm.go:310] [bootstrap-token] Using token: pq2gkm.unq2ugqcxn65kogn
	I0815 18:58:40.632223   76334 out.go:235]   - Configuring RBAC rules ...
	I0815 18:58:40.632374   76334 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 18:58:40.637881   76334 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 18:58:40.650154   76334 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 18:58:40.654181   76334 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 18:58:40.661117   76334 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 18:58:40.668959   76334 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 18:58:40.946885   76334 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 18:58:41.371829   76334 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 18:58:41.941460   76334 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 18:58:41.942697   76334 kubeadm.go:310] 
	I0815 18:58:41.942795   76334 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 18:58:41.942803   76334 kubeadm.go:310] 
	I0815 18:58:41.942887   76334 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 18:58:41.942895   76334 kubeadm.go:310] 
	I0815 18:58:41.942936   76334 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 18:58:41.942991   76334 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 18:58:41.943094   76334 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 18:58:41.943113   76334 kubeadm.go:310] 
	I0815 18:58:41.943161   76334 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 18:58:41.943170   76334 kubeadm.go:310] 
	I0815 18:58:41.943231   76334 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 18:58:41.943241   76334 kubeadm.go:310] 
	I0815 18:58:41.943331   76334 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 18:58:41.943452   76334 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 18:58:41.943547   76334 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 18:58:41.943556   76334 kubeadm.go:310] 
	I0815 18:58:41.943686   76334 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 18:58:41.943792   76334 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 18:58:41.943802   76334 kubeadm.go:310] 
	I0815 18:58:41.943942   76334 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pq2gkm.unq2ugqcxn65kogn \
	I0815 18:58:41.944089   76334 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 18:58:41.944331   76334 kubeadm.go:310] 	--control-plane 
	I0815 18:58:41.944346   76334 kubeadm.go:310] 
	I0815 18:58:41.944446   76334 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 18:58:41.944456   76334 kubeadm.go:310] 
	I0815 18:58:41.944717   76334 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pq2gkm.unq2ugqcxn65kogn \
	I0815 18:58:41.944948   76334 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 18:58:41.945340   76334 kubeadm.go:310] W0815 18:58:31.477682     857 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:58:41.945678   76334 kubeadm.go:310] W0815 18:58:31.478602     857 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:58:41.945831   76334 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
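Annotation: the kubeadm init output above ends with the join commands; the --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA certificate's Subject Public Key Info, which lets a joining node pin the CA it discovers through the bootstrap token. A short Go sketch that recomputes that hash from a CA PEM; the path is the control-plane location used in this log and should be treated as illustrative.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes the kubeadm discovery-token-ca-cert-hash for a CA
// certificate: the SHA-256 of its Subject Public Key Info.
func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(h, err)
}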
	I0815 18:58:41.945856   76334 cni.go:84] Creating CNI manager for "kindnet"
	I0815 18:58:41.947808   76334 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 18:58:40.940572   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:40.941014   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:40.941043   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:40.940976   76933 retry.go:31] will retry after 3.28886482s: waiting for machine to come up
	I0815 18:58:40.603156   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-fsrw6" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:43.102611   76153 pod_ready.go:103] pod "coredns-6f6b679f8f-fsrw6" in "kube-system" namespace has status "Ready":"False"
	I0815 18:58:41.949234   76334 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 18:58:41.955265   76334 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 18:58:41.955281   76334 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 18:58:41.978524   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 18:58:42.279720   76334 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:58:42.279823   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:42.279878   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-443473 minikube.k8s.io/updated_at=2024_08_15T18_58_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=kindnet-443473 minikube.k8s.io/primary=true
	I0815 18:58:42.313765   76334 ops.go:34] apiserver oom_adj: -16
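Annotation: the cat /proc/$(pgrep kube-apiserver)/oom_adj check above confirms the API server runs with oom_adj -16, i.e. it is strongly deprioritized for the kernel OOM killer. A tiny Go sketch of the same check; it assumes pgrep is available and matches exactly one process (newer kernels expose the equivalent oom_score_adj as well).

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj reads /proc/<pid>/oom_adj for the kube-apiserver process,
// like the bash one-liner in the log. Sketch; assumes a single match.
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := apiserverOOMAdj()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // e.g. -16
}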
	I0815 18:58:42.486270   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:42.986634   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:43.486684   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:43.986433   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:44.486949   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:44.986372   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:45.486565   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:45.986721   76334 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:58:46.090942   76334 kubeadm.go:1113] duration metric: took 3.811187658s to wait for elevateKubeSystemPrivileges
	I0815 18:58:46.090974   76334 kubeadm.go:394] duration metric: took 14.833211215s to StartCluster
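Annotation: the repeated kubectl get sa default calls above are a readiness poll: minikube retries until the default ServiceAccount exists before finishing the minikube-rbac cluster-admin binding, and the 3.8s duration metric is the total wait. A hedged Go sketch of that poll, shelling out to the bundled kubectl the way the ssh_runner lines do, with the paths copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds, the same
// readiness loop visible in the log above. Sketch only.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}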
	I0815 18:58:46.090994   76334 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:46.091090   76334 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:58:46.092229   76334 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:58:46.092424   76334 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.50.168 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:58:46.092430   76334 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 18:58:46.092445   76334 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:58:46.092654   76334 addons.go:69] Setting storage-provisioner=true in profile "kindnet-443473"
	I0815 18:58:46.092669   76334 config.go:182] Loaded profile config "kindnet-443473": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:58:46.092678   76334 addons.go:69] Setting default-storageclass=true in profile "kindnet-443473"
	I0815 18:58:46.092687   76334 addons.go:234] Setting addon storage-provisioner=true in "kindnet-443473"
	I0815 18:58:46.092714   76334 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-443473"
	I0815 18:58:46.092720   76334 host.go:66] Checking if "kindnet-443473" exists ...
	I0815 18:58:46.093180   76334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:46.093197   76334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:46.093232   76334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:46.093335   76334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:46.094019   76334 out.go:177] * Verifying Kubernetes components...
	I0815 18:58:46.095567   76334 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:58:46.110835   76334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I0815 18:58:46.111154   76334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33851
	I0815 18:58:46.111356   76334 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:46.111577   76334 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:46.112039   76334 main.go:141] libmachine: Using API Version  1
	I0815 18:58:46.112059   76334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:46.112166   76334 main.go:141] libmachine: Using API Version  1
	I0815 18:58:46.112189   76334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:46.112434   76334 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:46.112525   76334 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:46.112650   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetState
	I0815 18:58:46.113070   76334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:46.113103   76334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:46.116205   76334 addons.go:234] Setting addon default-storageclass=true in "kindnet-443473"
	I0815 18:58:46.116251   76334 host.go:66] Checking if "kindnet-443473" exists ...
	I0815 18:58:46.116659   76334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:46.116694   76334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:46.131040   76334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0815 18:58:46.131558   76334 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:46.132040   76334 main.go:141] libmachine: Using API Version  1
	I0815 18:58:46.132063   76334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:46.132389   76334 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:46.132658   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetState
	I0815 18:58:46.134421   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:46.135101   76334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I0815 18:58:46.135537   76334 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:46.136192   76334 main.go:141] libmachine: Using API Version  1
	I0815 18:58:46.136214   76334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:46.136291   76334 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:58:46.136531   76334 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:46.137153   76334 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:58:46.137181   76334 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:58:46.137695   76334 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:58:46.137711   76334 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:58:46.137724   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:46.140734   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:46.141153   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:46.141174   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:46.141389   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:46.141600   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:46.141760   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:46.141910   76334 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa Username:docker}
	I0815 18:58:46.153208   76334 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
	I0815 18:58:46.153655   76334 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:58:46.154182   76334 main.go:141] libmachine: Using API Version  1
	I0815 18:58:46.154206   76334 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:58:46.154684   76334 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:58:46.154880   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetState
	I0815 18:58:46.156480   76334 main.go:141] libmachine: (kindnet-443473) Calling .DriverName
	I0815 18:58:46.156697   76334 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:58:46.156715   76334 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:58:46.156733   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHHostname
	I0815 18:58:46.159388   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:46.159771   76334 main.go:141] libmachine: (kindnet-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:87:4d", ip: ""} in network mk-kindnet-443473: {Iface:virbr2 ExpiryTime:2024-08-15 19:58:13 +0000 UTC Type:0 Mac:52:54:00:32:87:4d Iaid: IPaddr:192.168.50.168 Prefix:24 Hostname:kindnet-443473 Clientid:01:52:54:00:32:87:4d}
	I0815 18:58:46.159797   76334 main.go:141] libmachine: (kindnet-443473) DBG | domain kindnet-443473 has defined IP address 192.168.50.168 and MAC address 52:54:00:32:87:4d in network mk-kindnet-443473
	I0815 18:58:46.159926   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHPort
	I0815 18:58:46.160111   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHKeyPath
	I0815 18:58:46.160256   76334 main.go:141] libmachine: (kindnet-443473) Calling .GetSSHUsername
	I0815 18:58:46.160374   76334 sshutil.go:53] new ssh client: &{IP:192.168.50.168 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/kindnet-443473/id_rsa Username:docker}
	I0815 18:58:46.266762   76334 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 18:58:46.338227   76334 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:58:46.527351   76334 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:58:46.567374   76334 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:58:46.825900   76334 start.go:971] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
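(Editor's note, not part of the captured log: the sed pipeline run a few lines above splices a hosts block, plus a log directive before errors, into the CoreDNS Corefile. Reconstructed from that command, the injected fragment looks roughly like this:

        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }

This is what makes host.minikube.internal resolvable from inside the kindnet-443473 cluster.)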
	I0815 18:58:46.826916   76334 node_ready.go:35] waiting up to 15m0s for node "kindnet-443473" to be "Ready" ...
	I0815 18:58:46.879948   76334 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:46.879972   76334 main.go:141] libmachine: (kindnet-443473) Calling .Close
	I0815 18:58:46.880268   76334 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:46.880289   76334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:46.880298   76334 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:46.880305   76334 main.go:141] libmachine: (kindnet-443473) Calling .Close
	I0815 18:58:46.880308   76334 main.go:141] libmachine: (kindnet-443473) DBG | Closing plugin on server side
	I0815 18:58:46.880553   76334 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:46.880586   76334 main.go:141] libmachine: (kindnet-443473) DBG | Closing plugin on server side
	I0815 18:58:46.880600   76334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:46.918393   76334 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:46.918416   76334 main.go:141] libmachine: (kindnet-443473) Calling .Close
	I0815 18:58:46.918803   76334 main.go:141] libmachine: (kindnet-443473) DBG | Closing plugin on server side
	I0815 18:58:46.918829   76334 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:46.918847   76334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:47.288502   76334 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:47.288528   76334 main.go:141] libmachine: (kindnet-443473) Calling .Close
	I0815 18:58:47.288830   76334 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:47.288849   76334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:47.288858   76334 main.go:141] libmachine: Making call to close driver server
	I0815 18:58:47.288867   76334 main.go:141] libmachine: (kindnet-443473) Calling .Close
	I0815 18:58:47.289081   76334 main.go:141] libmachine: (kindnet-443473) DBG | Closing plugin on server side
	I0815 18:58:47.289145   76334 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:58:47.289179   76334 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:58:47.290942   76334 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
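(Editor's note: a quick, illustrative way to confirm the two addons enabled above, assuming the kindnet-443473 profile is still running and its kubeconfig context carries the profile name, would be:

        kubectl --context kindnet-443473 -n kube-system get pod storage-provisioner
        kubectl --context kindnet-443473 get storageclass

These commands are not part of the recorded run; they only restate what the storage-provisioner.yaml and storageclass.yaml applies above should have produced.)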
	I0815 18:58:44.231216   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:44.231695   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find current IP address of domain calico-443473 in network mk-calico-443473
	I0815 18:58:44.231718   76538 main.go:141] libmachine: (calico-443473) DBG | I0815 18:58:44.231659   76933 retry.go:31] will retry after 3.812468343s: waiting for machine to come up
	I0815 18:58:48.046458   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.046827   76538 main.go:141] libmachine: (calico-443473) Found IP for machine: 192.168.72.112
	I0815 18:58:48.046854   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has current primary IP address 192.168.72.112 and MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.046864   76538 main.go:141] libmachine: (calico-443473) Reserving static IP address...
	I0815 18:58:48.047344   76538 main.go:141] libmachine: (calico-443473) DBG | unable to find host DHCP lease matching {name: "calico-443473", mac: "52:54:00:8e:82:a4", ip: "192.168.72.112"} in network mk-calico-443473
	I0815 18:58:48.122121   76538 main.go:141] libmachine: (calico-443473) Reserved static IP address: 192.168.72.112
	I0815 18:58:48.122152   76538 main.go:141] libmachine: (calico-443473) DBG | Getting to WaitForSSH function...
	I0815 18:58:48.122162   76538 main.go:141] libmachine: (calico-443473) Waiting for SSH to be available...
	I0815 18:58:48.125292   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.125891   76538 main.go:141] libmachine: (calico-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:82:a4", ip: ""} in network mk-calico-443473: {Iface:virbr4 ExpiryTime:2024-08-15 19:58:40 +0000 UTC Type:0 Mac:52:54:00:8e:82:a4 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8e:82:a4}
	I0815 18:58:48.125923   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined IP address 192.168.72.112 and MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.126080   76538 main.go:141] libmachine: (calico-443473) DBG | Using SSH client type: external
	I0815 18:58:48.126119   76538 main.go:141] libmachine: (calico-443473) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473/id_rsa (-rw-------)
	I0815 18:58:48.126148   76538 main.go:141] libmachine: (calico-443473) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.112 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:58:48.126161   76538 main.go:141] libmachine: (calico-443473) DBG | About to run SSH command:
	I0815 18:58:48.126176   76538 main.go:141] libmachine: (calico-443473) DBG | exit 0
	I0815 18:58:48.252813   76538 main.go:141] libmachine: (calico-443473) DBG | SSH cmd err, output: <nil>: 
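(Editor's note: for readability, the external-SSH flag list logged just above for calico-443473 corresponds roughly to an invocation of this shape; the key path, IP, and flags are taken verbatim from the preceding DBG lines and the command run is the "exit 0" probe:

        /usr/bin/ssh -F /dev/null \
          -o ConnectionAttempts=3 -o ConnectTimeout=10 \
          -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
          -o PasswordAuthentication=no -o ServerAliveInterval=60 \
          -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
          -o IdentitiesOnly=yes \
          -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/calico-443473/id_rsa \
          -p 22 docker@192.168.72.112 "exit 0"

This is only a reconstruction of the logged argument vector, not output from the run itself.)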
	I0815 18:58:48.253087   76538 main.go:141] libmachine: (calico-443473) KVM machine creation complete!
	I0815 18:58:48.253420   76538 main.go:141] libmachine: (calico-443473) Calling .GetConfigRaw
	I0815 18:58:48.253954   76538 main.go:141] libmachine: (calico-443473) Calling .DriverName
	I0815 18:58:48.254141   76538 main.go:141] libmachine: (calico-443473) Calling .DriverName
	I0815 18:58:48.254334   76538 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0815 18:58:48.254352   76538 main.go:141] libmachine: (calico-443473) Calling .GetState
	I0815 18:58:48.255830   76538 main.go:141] libmachine: Detecting operating system of created instance...
	I0815 18:58:48.255848   76538 main.go:141] libmachine: Waiting for SSH to be available...
	I0815 18:58:48.255864   76538 main.go:141] libmachine: Getting to WaitForSSH function...
	I0815 18:58:48.255873   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHHostname
	I0815 18:58:48.258408   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.258793   76538 main.go:141] libmachine: (calico-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:82:a4", ip: ""} in network mk-calico-443473: {Iface:virbr4 ExpiryTime:2024-08-15 19:58:40 +0000 UTC Type:0 Mac:52:54:00:8e:82:a4 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:calico-443473 Clientid:01:52:54:00:8e:82:a4}
	I0815 18:58:48.258818   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined IP address 192.168.72.112 and MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.258943   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHPort
	I0815 18:58:48.259093   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHKeyPath
	I0815 18:58:48.259204   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHKeyPath
	I0815 18:58:48.259361   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHUsername
	I0815 18:58:48.259587   76538 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:48.259797   76538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0815 18:58:48.259809   76538 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0815 18:58:48.364519   76538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:58:48.364547   76538 main.go:141] libmachine: Detecting the provisioner...
	I0815 18:58:48.364557   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHHostname
	I0815 18:58:48.367463   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.367909   76538 main.go:141] libmachine: (calico-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:82:a4", ip: ""} in network mk-calico-443473: {Iface:virbr4 ExpiryTime:2024-08-15 19:58:40 +0000 UTC Type:0 Mac:52:54:00:8e:82:a4 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:calico-443473 Clientid:01:52:54:00:8e:82:a4}
	I0815 18:58:48.367938   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined IP address 192.168.72.112 and MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.368115   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHPort
	I0815 18:58:48.368321   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHKeyPath
	I0815 18:58:48.368535   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHKeyPath
	I0815 18:58:48.368717   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHUsername
	I0815 18:58:48.368909   76538 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:48.369135   76538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0815 18:58:48.369152   76538 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0815 18:58:48.477464   76538 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0815 18:58:48.477546   76538 main.go:141] libmachine: found compatible host: buildroot
	I0815 18:58:48.477557   76538 main.go:141] libmachine: Provisioning with buildroot...
	I0815 18:58:48.477564   76538 main.go:141] libmachine: (calico-443473) Calling .GetMachineName
	I0815 18:58:48.477784   76538 buildroot.go:166] provisioning hostname "calico-443473"
	I0815 18:58:48.477809   76538 main.go:141] libmachine: (calico-443473) Calling .GetMachineName
	I0815 18:58:48.477994   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHHostname
	I0815 18:58:48.480308   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.480668   76538 main.go:141] libmachine: (calico-443473) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8e:82:a4", ip: ""} in network mk-calico-443473: {Iface:virbr4 ExpiryTime:2024-08-15 19:58:40 +0000 UTC Type:0 Mac:52:54:00:8e:82:a4 Iaid: IPaddr:192.168.72.112 Prefix:24 Hostname:calico-443473 Clientid:01:52:54:00:8e:82:a4}
	I0815 18:58:48.480705   76538 main.go:141] libmachine: (calico-443473) DBG | domain calico-443473 has defined IP address 192.168.72.112 and MAC address 52:54:00:8e:82:a4 in network mk-calico-443473
	I0815 18:58:48.480768   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHPort
	I0815 18:58:48.480939   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHKeyPath
	I0815 18:58:48.481087   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHKeyPath
	I0815 18:58:48.481246   76538 main.go:141] libmachine: (calico-443473) Calling .GetSSHUsername
	I0815 18:58:48.481413   76538 main.go:141] libmachine: Using SSH client type: native
	I0815 18:58:48.481624   76538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.112 22 <nil> <nil>}
	I0815 18:58:48.481638   76538 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-443473 && echo "calico-443473" | sudo tee /etc/hostname
	
	
	==> CRI-O <==
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.260196619Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748329260162844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fead25c6-3e38-4a42-ac6a-4b6a37987aee name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.260935535Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07658504-5d01-4fb8-8cee-7f370d6fa8cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.260985988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07658504-5d01-4fb8-8cee-7f370d6fa8cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.261441062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747047326270992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905a73b877f297cda91cd2774858f2d95a9cf203fde6aa1e7e30eb8742f3bffc,PodSandboxId:3117121dfcf11740eeda723004bd1d01d3ba4aee940fa602d8ddf676c0a5713a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747027126183973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c26ca004-1d45-4ab6-ae7d-1e32614dccc0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99,PodSandboxId:bb96ed99d7d75ac456a668c56a179414052528008053df318d478956f082370f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747023962100509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-brc2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16add35-fdfd-4a39-8814-ec74318ae245,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad,PodSandboxId:7ca470c14cdbad4876f50ee655027b1b82b4b3a660a62a956146fce2af41dc7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747016539724119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bnxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3915f67-8
99a-40b9-bb2a-adef461b6320,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747016547520692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-
203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3,PodSandboxId:c9d2271313634faa933ba3161e540740f18ae3acc12e7533c5bf81b3027daf77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747012866133608,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ec8dccc8d89d60ba8baa605ce2b0f7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c,PodSandboxId:32831d409ffdb810f68e1d42e019909ed645f178a81ea873ae3b2f0077c65024,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747012825798717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db054f45180592a2196fa4f7877
4bd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2,PodSandboxId:bddd685825c2e5da33fa039e58d3a24436a433bd2bf248f647748b337eb46ee2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747012801216445,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f228bce39c4a51992ab3fab5f6435
565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428,PodSandboxId:1ebe7207156d7e5166e9329af404eb04e485b8fd0237e7e7918aed03e6b71d16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747012795472193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bb35d1563e8e927ca812fbe5d87d1
8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07658504-5d01-4fb8-8cee-7f370d6fa8cd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.310305634Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=115e23c0-8792-44c2-94e9-b373af3f6bc0 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.310418417Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=115e23c0-8792-44c2-94e9-b373af3f6bc0 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.312288103Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=694e17b9-b0f7-40d1-b82d-c4ed4fc28df3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.312936108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748329312906820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=694e17b9-b0f7-40d1-b82d-c4ed4fc28df3 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.313597257Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=187757f4-147a-4cf1-a2de-67791085b923 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.313684932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=187757f4-147a-4cf1-a2de-67791085b923 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.314014519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747047326270992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905a73b877f297cda91cd2774858f2d95a9cf203fde6aa1e7e30eb8742f3bffc,PodSandboxId:3117121dfcf11740eeda723004bd1d01d3ba4aee940fa602d8ddf676c0a5713a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747027126183973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c26ca004-1d45-4ab6-ae7d-1e32614dccc0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99,PodSandboxId:bb96ed99d7d75ac456a668c56a179414052528008053df318d478956f082370f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747023962100509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-brc2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16add35-fdfd-4a39-8814-ec74318ae245,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad,PodSandboxId:7ca470c14cdbad4876f50ee655027b1b82b4b3a660a62a956146fce2af41dc7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747016539724119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bnxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3915f67-8
99a-40b9-bb2a-adef461b6320,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747016547520692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-
203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3,PodSandboxId:c9d2271313634faa933ba3161e540740f18ae3acc12e7533c5bf81b3027daf77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747012866133608,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ec8dccc8d89d60ba8baa605ce2b0f7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c,PodSandboxId:32831d409ffdb810f68e1d42e019909ed645f178a81ea873ae3b2f0077c65024,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747012825798717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db054f45180592a2196fa4f7877
4bd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2,PodSandboxId:bddd685825c2e5da33fa039e58d3a24436a433bd2bf248f647748b337eb46ee2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747012801216445,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f228bce39c4a51992ab3fab5f6435
565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428,PodSandboxId:1ebe7207156d7e5166e9329af404eb04e485b8fd0237e7e7918aed03e6b71d16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747012795472193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bb35d1563e8e927ca812fbe5d87d1
8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=187757f4-147a-4cf1-a2de-67791085b923 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.361698580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0e92a60e-19e7-405f-a2d4-823ad8b5f795 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.361785871Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0e92a60e-19e7-405f-a2d4-823ad8b5f795 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.362834601Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3ab9922-787a-4f49-93c0-2c0de42cc4ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.363427252Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748329363401338,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3ab9922-787a-4f49-93c0-2c0de42cc4ec name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.364017818Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19399c28-39fa-4d54-a1bc-ce474e0b89dd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.364085119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19399c28-39fa-4d54-a1bc-ce474e0b89dd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.364350132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747047326270992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905a73b877f297cda91cd2774858f2d95a9cf203fde6aa1e7e30eb8742f3bffc,PodSandboxId:3117121dfcf11740eeda723004bd1d01d3ba4aee940fa602d8ddf676c0a5713a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747027126183973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c26ca004-1d45-4ab6-ae7d-1e32614dccc0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99,PodSandboxId:bb96ed99d7d75ac456a668c56a179414052528008053df318d478956f082370f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747023962100509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-brc2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16add35-fdfd-4a39-8814-ec74318ae245,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad,PodSandboxId:7ca470c14cdbad4876f50ee655027b1b82b4b3a660a62a956146fce2af41dc7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747016539724119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bnxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3915f67-8
99a-40b9-bb2a-adef461b6320,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747016547520692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-
203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3,PodSandboxId:c9d2271313634faa933ba3161e540740f18ae3acc12e7533c5bf81b3027daf77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747012866133608,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ec8dccc8d89d60ba8baa605ce2b0f7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c,PodSandboxId:32831d409ffdb810f68e1d42e019909ed645f178a81ea873ae3b2f0077c65024,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747012825798717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db054f45180592a2196fa4f7877
4bd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2,PodSandboxId:bddd685825c2e5da33fa039e58d3a24436a433bd2bf248f647748b337eb46ee2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747012801216445,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f228bce39c4a51992ab3fab5f6435
565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428,PodSandboxId:1ebe7207156d7e5166e9329af404eb04e485b8fd0237e7e7918aed03e6b71d16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747012795472193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bb35d1563e8e927ca812fbe5d87d1
8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=19399c28-39fa-4d54-a1bc-ce474e0b89dd name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.401003697Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27acdb9d-13fa-4a95-b3c1-effa9c086737 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.401087589Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27acdb9d-13fa-4a95-b3c1-effa9c086737 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.402816285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=afa37b2f-7c42-4933-89f0-9ddb05764bbf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.403303374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748329403273276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=afa37b2f-7c42-4933-89f0-9ddb05764bbf name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.403920737Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b0ecd4a-6169-4c24-ad08-f051c2eb1ac3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.403979932Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b0ecd4a-6169-4c24-ad08-f051c2eb1ac3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:58:49 default-k8s-diff-port-423062 crio[723]: time="2024-08-15 18:58:49.404187271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747047326270992,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:905a73b877f297cda91cd2774858f2d95a9cf203fde6aa1e7e30eb8742f3bffc,PodSandboxId:3117121dfcf11740eeda723004bd1d01d3ba4aee940fa602d8ddf676c0a5713a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747027126183973,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: c26ca004-1d45-4ab6-ae7d-1e32614dccc0,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99,PodSandboxId:bb96ed99d7d75ac456a668c56a179414052528008053df318d478956f082370f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747023962100509,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-brc2r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d16add35-fdfd-4a39-8814-ec74318ae245,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad,PodSandboxId:7ca470c14cdbad4876f50ee655027b1b82b4b3a660a62a956146fce2af41dc7a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747016539724119,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bnxv7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3915f67-8
99a-40b9-bb2a-adef461b6320,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87,PodSandboxId:9533da6294cd4705e16ec5596fdafaf21404cd835a0a5ee8af682d70061bf13f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747016547520692,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9645f17f-82b6-4f8c-9a37-
203ed53fbea8,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3,PodSandboxId:c9d2271313634faa933ba3161e540740f18ae3acc12e7533c5bf81b3027daf77,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747012866133608,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c7ec8dccc8d89d60ba8baa605ce2b0f7,},Annotations:map[s
tring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c,PodSandboxId:32831d409ffdb810f68e1d42e019909ed645f178a81ea873ae3b2f0077c65024,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747012825798717,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: db054f45180592a2196fa4f7877
4bd19,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2,PodSandboxId:bddd685825c2e5da33fa039e58d3a24436a433bd2bf248f647748b337eb46ee2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747012801216445,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f228bce39c4a51992ab3fab5f6435
565,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428,PodSandboxId:1ebe7207156d7e5166e9329af404eb04e485b8fd0237e7e7918aed03e6b71d16,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747012795472193,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-423062,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50bb35d1563e8e927ca812fbe5d87d1
8,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b0ecd4a-6169-4c24-ad08-f051c2eb1ac3 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5ba0de31ac4d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   9533da6294cd4       storage-provisioner
	905a73b877f29       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   3117121dfcf11       busybox
	4002a75569d01       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      21 minutes ago      Running             coredns                   1                   bb96ed99d7d75       coredns-6f6b679f8f-brc2r
	de97b6534ff12       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   9533da6294cd4       storage-provisioner
	78aa18ab3ca1d       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      21 minutes ago      Running             kube-proxy                1                   7ca470c14cdba       kube-proxy-bnxv7
	7c7302ebd91e3       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   c9d2271313634       etcd-default-k8s-diff-port-423062
	b5437880e3b54       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      21 minutes ago      Running             kube-controller-manager   1                   32831d409ffdb       kube-controller-manager-default-k8s-diff-port-423062
	4ff0eaf196e91       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      21 minutes ago      Running             kube-scheduler            1                   bddd685825c2e       kube-scheduler-default-k8s-diff-port-423062
	a728cb5e05d1d       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      21 minutes ago      Running             kube-apiserver            1                   1ebe7207156d7       kube-apiserver-default-k8s-diff-port-423062
	
	
	==> coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53751 - 15090 "HINFO IN 4697154533671768996.2502729668727686100. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016745811s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-423062
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-423062
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=default-k8s-diff-port-423062
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T18_29_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-423062
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:58:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:57:50 +0000   Thu, 15 Aug 2024 18:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:57:50 +0000   Thu, 15 Aug 2024 18:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:57:50 +0000   Thu, 15 Aug 2024 18:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:57:50 +0000   Thu, 15 Aug 2024 18:37:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.7
	  Hostname:    default-k8s-diff-port-423062
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1caebc083b84591add60167fa27e454
	  System UUID:                f1caebc0-83b8-4591-add6-0167fa27e454
	  Boot ID:                    d3a93374-75d3-4871-a6e0-5c63fd93ab57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-6f6b679f8f-brc2r                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-423062                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-423062             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-423062    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-bnxv7                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-423062             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-8mppk                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-423062 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-423062 event: Registered Node default-k8s-diff-port-423062 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-423062 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-423062 event: Registered Node default-k8s-diff-port-423062 in Controller
	
	
	==> dmesg <==
	[Aug15 18:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051782] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039090] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.882841] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.393540] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.577280] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.050377] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.064774] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.072686] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.218480] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +0.141172] systemd-fstab-generator[679]: Ignoring "noauto" option for root device
	[  +0.296997] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.233668] systemd-fstab-generator[806]: Ignoring "noauto" option for root device
	[  +0.061403] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.105962] systemd-fstab-generator[926]: Ignoring "noauto" option for root device
	[  +4.587601] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.458326] systemd-fstab-generator[1556]: Ignoring "noauto" option for root device
	[Aug15 18:37] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.156722] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] <==
	{"level":"info","ts":"2024-08-15T18:56:30.633033Z","caller":"traceutil/trace.go:171","msg":"trace[105573486] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1576; }","duration":"138.270493ms","start":"2024-08-15T18:56:30.494754Z","end":"2024-08-15T18:56:30.633024Z","steps":["trace[105573486] 'agreement among raft nodes before linearized reading'  (duration: 138.043762ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:56:30.633236Z","caller":"traceutil/trace.go:171","msg":"trace[980674001] transaction","detail":"{read_only:false; response_revision:1576; number_of_response:1; }","duration":"140.633135ms","start":"2024-08-15T18:56:30.492588Z","end":"2024-08-15T18:56:30.633221Z","steps":["trace[980674001] 'process raft request'  (duration: 139.825146ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:56:54.474349Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1352}
	{"level":"info","ts":"2024-08-15T18:56:54.478478Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1352,"took":"3.615118ms","hash":281935730,"current-db-size-bytes":2633728,"current-db-size":"2.6 MB","current-db-size-in-use-bytes":1536000,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-08-15T18:56:54.478561Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":281935730,"revision":1352,"compact-revision":1108}
	{"level":"info","ts":"2024-08-15T18:57:21.460180Z","caller":"traceutil/trace.go:171","msg":"trace[2073282084] transaction","detail":"{read_only:false; response_revision:1617; number_of_response:1; }","duration":"578.56039ms","start":"2024-08-15T18:57:20.881592Z","end":"2024-08-15T18:57:21.460152Z","steps":["trace[2073282084] 'process raft request'  (duration: 578.381599ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:21.460502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:20.881579Z","time spent":"578.823366ms","remote":"127.0.0.1:50622","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1616 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-15T18:57:22.004371Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.092572ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993406796452284935 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-5jkv2fgqzwrcpkawffkhnpwubq\" mod_revision:1608 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-5jkv2fgqzwrcpkawffkhnpwubq\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-5jkv2fgqzwrcpkawffkhnpwubq\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T18:57:22.004919Z","caller":"traceutil/trace.go:171","msg":"trace[1050037279] linearizableReadLoop","detail":"{readStateIndex:1908; appliedIndex:1906; }","duration":"958.075582ms","start":"2024-08-15T18:57:21.046718Z","end":"2024-08-15T18:57:22.004794Z","steps":["trace[1050037279] 'read index received'  (duration: 413.379244ms)","trace[1050037279] 'applied index is now lower than readState.Index'  (duration: 544.694106ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-15T18:57:22.005064Z","caller":"traceutil/trace.go:171","msg":"trace[2141832949] transaction","detail":"{read_only:false; response_revision:1618; number_of_response:1; }","duration":"1.094342918s","start":"2024-08-15T18:57:20.910708Z","end":"2024-08-15T18:57:22.005051Z","steps":["trace[2141832949] 'process raft request'  (duration: 759.498411ms)","trace[2141832949] 'compare'  (duration: 333.889855ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:57:22.005157Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:20.910685Z","time spent":"1.094424547s","remote":"127.0.0.1:50714","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-5jkv2fgqzwrcpkawffkhnpwubq\" mod_revision:1608 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-5jkv2fgqzwrcpkawffkhnpwubq\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-5jkv2fgqzwrcpkawffkhnpwubq\" > >"}
	{"level":"warn","ts":"2024-08-15T18:57:22.005200Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"958.481096ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.005251Z","caller":"traceutil/trace.go:171","msg":"trace[379657767] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1618; }","duration":"958.526606ms","start":"2024-08-15T18:57:21.046714Z","end":"2024-08-15T18:57:22.005240Z","steps":["trace[379657767] 'agreement among raft nodes before linearized reading'  (duration: 958.467605ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.005277Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:21.046673Z","time spent":"958.597325ms","remote":"127.0.0.1:50472","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-15T18:57:22.005156Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"764.817721ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-15T18:57:22.005429Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"513.842326ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.005478Z","caller":"traceutil/trace.go:171","msg":"trace[2025905819] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1618; }","duration":"513.895132ms","start":"2024-08-15T18:57:21.491575Z","end":"2024-08-15T18:57:22.005470Z","steps":["trace[2025905819] 'agreement among raft nodes before linearized reading'  (duration: 513.828025ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.005502Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:21.491534Z","time spent":"513.961083ms","remote":"127.0.0.1:50638","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":27,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2024-08-15T18:57:22.005448Z","caller":"traceutil/trace.go:171","msg":"trace[748370699] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1618; }","duration":"765.12502ms","start":"2024-08-15T18:57:21.240314Z","end":"2024-08-15T18:57:22.005439Z","steps":["trace[748370699] 'agreement among raft nodes before linearized reading'  (duration: 764.801358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.653912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.707571ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993406796452284942 > lease_revoke:<id:42329157541349aa>","response":"size:27"}
	{"level":"info","ts":"2024-08-15T18:57:22.654035Z","caller":"traceutil/trace.go:171","msg":"trace[840077758] linearizableReadLoop","detail":"{readStateIndex:1909; appliedIndex:1908; }","duration":"162.098997ms","start":"2024-08-15T18:57:22.491919Z","end":"2024-08-15T18:57:22.654018Z","steps":["trace[840077758] 'read index received'  (duration: 31.233446ms)","trace[840077758] 'applied index is now lower than readState.Index'  (duration: 130.864213ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:57:22.654133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.250711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.654157Z","caller":"traceutil/trace.go:171","msg":"trace[95935100] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1618; }","duration":"162.312157ms","start":"2024-08-15T18:57:22.491836Z","end":"2024-08-15T18:57:22.654148Z","steps":["trace[95935100] 'agreement among raft nodes before linearized reading'  (duration: 162.222647ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:58:32.308117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.058368ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13993406796452285363 > lease_revoke:<id:4232915754134b4d>","response":"size:27"}
	{"level":"info","ts":"2024-08-15T18:58:34.051776Z","caller":"traceutil/trace.go:171","msg":"trace[1915237608] transaction","detail":"{read_only:false; response_revision:1677; number_of_response:1; }","duration":"150.768653ms","start":"2024-08-15T18:58:33.900981Z","end":"2024-08-15T18:58:34.051749Z","steps":["trace[1915237608] 'process raft request'  (duration: 150.547068ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:58:49 up 22 min,  0 users,  load average: 0.17, 0.08, 0.08
	Linux default-k8s-diff-port-423062 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] <==
	I0815 18:54:56.676282       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:54:56.677422       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:56:55.677128       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:56:55.677351       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 18:56:56.679985       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:56:56.680054       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 18:56:56.680180       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:56:56.680311       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:56:56.681212       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:56:56.682412       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:57:56.681738       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:57:56.681802       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 18:57:56.682980       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:57:56.683062       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:57:56.683153       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:57:56.684213       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] <==
	E0815 18:53:29.527553       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:53:30.063107       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:53:59.533078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:54:00.069687       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:54:29.540761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:54:30.077192       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:54:59.547347       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:55:00.088611       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:55:29.553799       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:55:30.097619       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:55:59.560906       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:56:00.105998       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:56:29.569709       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:56:30.113772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:56:59.577162       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:57:00.121335       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:57:29.584578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:57:30.130355       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:57:50.470295       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-423062"
	E0815 18:57:59.591803       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:58:00.138713       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:58:29.173589       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="436.804µs"
	E0815 18:58:29.599067       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:58:30.146979       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:58:43.170688       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="90.001µs"
	
	
	==> kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:36:56.837357       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:36:56.857242       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.7"]
	E0815 18:36:56.857529       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:36:56.906030       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:36:56.906090       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:36:56.906126       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:36:56.912002       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:36:56.912282       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:36:56.912305       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:36:56.916483       1 config.go:197] "Starting service config controller"
	I0815 18:36:56.916514       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:36:56.916535       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:36:56.916539       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:36:56.917447       1 config.go:326] "Starting node config controller"
	I0815 18:36:56.917478       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:36:57.017102       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:36:57.017211       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:36:57.017971       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] <==
	W0815 18:36:55.770017       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0815 18:36:55.770103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.770328       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 18:36:55.770428       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.770630       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0815 18:36:55.772743       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 18:36:55.772788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.772890       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 18:36:55.772922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773021       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 18:36:55.773051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0815 18:36:55.773077       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773111       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 18:36:55.773488       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 18:36:55.773537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773243       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 18:36:55.773555       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 18:36:55.773569       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 18:36:55.773667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:36:55.773765       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 18:36:55.773896       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0815 18:36:55.820130       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:57:50 default-k8s-diff-port-423062 kubelet[933]: E0815 18:57:50.154141     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:57:52 default-k8s-diff-port-423062 kubelet[933]: E0815 18:57:52.166807     933 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:57:52 default-k8s-diff-port-423062 kubelet[933]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:57:52 default-k8s-diff-port-423062 kubelet[933]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:57:52 default-k8s-diff-port-423062 kubelet[933]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:57:52 default-k8s-diff-port-423062 kubelet[933]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:57:52 default-k8s-diff-port-423062 kubelet[933]: E0815 18:57:52.429161     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748272428809892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:52 default-k8s-diff-port-423062 kubelet[933]: E0815 18:57:52.429184     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748272428809892,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:01 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:01.154345     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:58:02 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:02.430705     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748282430014048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:02 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:02.430740     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748282430014048,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:12 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:12.432172     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748292431552272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:12 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:12.432214     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748292431552272,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:16 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:16.169241     933 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 15 18:58:16 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:16.169304     933 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 15 18:58:16 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:16.169477     933 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm8lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-8mppk_kube-system(27b1cd42-fec2-44d2-95f4-207d5aedb1db): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 15 18:58:16 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:16.170772     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:58:22 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:22.433473     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748302433215358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:22 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:22.433814     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748302433215358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:29 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:29.154568     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	Aug 15 18:58:32 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:32.442062     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748312441247483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:32 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:32.442133     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748312441247483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:42 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:42.443297     933 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748322442968861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:42 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:42.443707     933 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748322442968861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:58:43 default-k8s-diff-port-423062 kubelet[933]: E0815 18:58:43.153773     933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8mppk" podUID="27b1cd42-fec2-44d2-95f4-207d5aedb1db"
	
	
	==> storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] <==
	I0815 18:37:27.427133       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 18:37:27.439588       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 18:37:27.439746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 18:37:44.843976       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 18:37:44.844132       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-423062_213dbfc4-6ef0-4e02-8fb1-d789b64f197b!
	I0815 18:37:44.844978       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6dbad7f-8bb0-484b-9814-24ac362644b1", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-423062_213dbfc4-6ef0-4e02-8fb1-d789b64f197b became leader
	I0815 18:37:44.945011       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-423062_213dbfc4-6ef0-4e02-8fb1-d789b64f197b!
	
	
	==> storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] <==
	I0815 18:36:56.683595       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 18:37:26.690445       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
E0815 18:58:50.523170   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-423062 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8mppk
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-423062 describe pod metrics-server-6867b74b74-8mppk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-423062 describe pod metrics-server-6867b74b74-8mppk: exit status 1 (81.148516ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8mppk" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-423062 describe pod metrics-server-6867b74b74-8mppk: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (501.29s)
E0815 19:00:54.121367   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (415.35s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-555028 -n embed-certs-555028
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-15 18:57:34.906713175 +0000 UTC m=+6745.884818347
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-555028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-555028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.543µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-555028 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-555028 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-555028 logs -n 25: (1.293787146s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |       Profile       |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	| delete  | -p NoKubernetes-692760                     | NoKubernetes-692760 | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC | 15 Aug 24 18:23 UTC |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | iptables-save                              |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | iptables -t nat -L -n -v                   |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl status kubelet --all             |                     |         |         |                     |                     |
	|         | --full --no-pager                          |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl cat kubelet                      |                     |         |         |                     |                     |
	|         | --no-pager                                 |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | journalctl -xeu kubelet --all              |                     |         |         |                     |                     |
	|         | --full --no-pager                          |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo cat                 | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | /etc/kubernetes/kubelet.conf               |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo cat                 | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | /var/lib/kubelet/config.yaml               |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl status docker --all              |                     |         |         |                     |                     |
	|         | --full --no-pager                          |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl cat docker                       |                     |         |         |                     |                     |
	|         | --no-pager                                 |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo cat                 | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | /etc/docker/daemon.json                    |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo docker              | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | system info                                |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl status cri-docker                |                     |         |         |                     |                     |
	|         | --all --full --no-pager                    |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl cat cri-docker                   |                     |         |         |                     |                     |
	|         | --no-pager                                 |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo cat                 | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | cri-dockerd --version                      |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl status containerd                |                     |         |         |                     |                     |
	|         | --all --full --no-pager                    |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl cat containerd                   |                     |         |         |                     |                     |
	|         | --no-pager                                 |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo cat                 | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo cat                 | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | /etc/containerd/config.toml                |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | containerd config dump                     |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl status crio --all                |                     |         |         |                     |                     |
	|         | --full --no-pager                          |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo                     | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | systemctl cat crio --no-pager              |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo find                | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                     |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                     |         |         |                     |                     |
	| ssh     | -p kubenet-443473 sudo crio                | kubenet-443473      | jenkins | v1.33.1 | 15 Aug 24 18:23 UTC |                     |
	|         | config                                     |                     |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:57:35
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:57:35.042503   76153 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:57:35.042594   76153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:57:35.042601   76153 out.go:358] Setting ErrFile to fd 2...
	I0815 18:57:35.042606   76153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:57:35.042776   76153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:57:35.043358   76153 out.go:352] Setting JSON to false
	I0815 18:57:35.044285   76153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9601,"bootTime":1723738654,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:57:35.044350   76153 start.go:139] virtualization: kvm guest
	I0815 18:57:35.046656   76153 out.go:177] * [auto-443473] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:57:35.048235   76153 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:57:35.048299   76153 notify.go:220] Checking for updates...
	I0815 18:57:35.050902   76153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:57:35.052349   76153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:57:35.053770   76153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:57:35.054960   76153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:57:35.056231   76153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:57:35.057888   76153 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:35.058029   76153 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:35.058189   76153 config.go:182] Loaded profile config "newest-cni-828957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:35.058280   76153 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:57:35.558471   76153 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 18:57:35.559655   76153 start.go:297] selected driver: kvm2
	I0815 18:57:35.559675   76153 start.go:901] validating driver "kvm2" against <nil>
	I0815 18:57:35.559686   76153 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:57:35.560476   76153 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:57:35.560593   76153 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:57:35.577335   76153 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:57:35.577385   76153 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 18:57:35.577640   76153 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:57:35.577670   76153 cni.go:84] Creating CNI manager for ""
	I0815 18:57:35.577678   76153 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:57:35.577685   76153 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 18:57:35.577742   76153 start.go:340] cluster config:
	{Name:auto-443473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:57:35.577853   76153 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:57:35.580577   76153 out.go:177] * Starting "auto-443473" primary control-plane node in "auto-443473" cluster
	I0815 18:57:35.582566   76153 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:57:35.582614   76153 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:57:35.582624   76153 cache.go:56] Caching tarball of preloaded images
	I0815 18:57:35.582701   76153 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:57:35.582712   76153 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 18:57:35.582812   76153 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/config.json ...
	I0815 18:57:35.582831   76153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/auto-443473/config.json: {Name:mkd6dc41995462d61c92000f98e67d749a652afd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:57:35.582958   76153 start.go:360] acquireMachinesLock for auto-443473: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:57:35.582983   76153 start.go:364] duration metric: took 13.485µs to acquireMachinesLock for "auto-443473"
	I0815 18:57:35.582996   76153 start.go:93] Provisioning new machine with config: &{Name:auto-443473 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.31.0 ClusterName:auto-443473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:57:35.583052   76153 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> CRI-O <==
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.086423810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748256086391994,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5bb70d9e-c4de-4254-a381-21e213108774 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.087178374Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=02d455cb-0e5d-4ed8-b088-593dff5d5877 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.087267833Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=02d455cb-0e5d-4ed8-b088-593dff5d5877 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.087581952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd,PodSandboxId:2bee519619535082644fe996c8b8fbb83d70e601fe2096259878aca2111f98db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747293941754231,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6979830-492e-4ef7-960f-2d4756de1c8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923,PodSandboxId:661bee4cd9442dbb4799db303272bf39168ead51cc18b84e439cf9c131bd132c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293418149691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rc947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d041322-9d6b-4f46-8f58-e2991f34a297,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23,PodSandboxId:14b164f9ba75378fb5eb2dde1f5dd63af841fd35173bc236e45de1fe1818f34d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293393435426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mf6q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
5f7f959-715b-48a1-9f85-f267614182f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60,PodSandboxId:21244b9c171a08ce8ce0df6e42b966866e2be778e8645f28b431000908dbc672,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723747292717441345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ktczt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e5b692-edd5-48fd-879b-7b8da4dea9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202,PodSandboxId:288e460fbf36cb3325b66c511b3800e477442ef7918fd5be592c9cca7575d44f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747281489372370,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7be271d3c560008ab55525ae8d1647,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b,PodSandboxId:045b1bc78063e67745939f5c01b8bf7e68904f5571e29cecb54a33fcab375408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747281483006224,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29,PodSandboxId:72b366a683f32ceb84eaa9817abcde93225b8cc46e31b3ed361cb0836b047fa1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747281516659176,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61757fe39b4aeb4552b1709a7caa21c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd,PodSandboxId:81d4b953fb109076577f2ef42ccd5bb0d1ae555d6b29cfa93d9d8b9c4eb27a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747281419870562,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e277384463d451b36e4fbd6f3eedcba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127,PodSandboxId:28629532ce90ad5195501dc9d9c6c016481208aa96b69eb7d120166ac83f6f3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746994145532058,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=02d455cb-0e5d-4ed8-b088-593dff5d5877 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.133625875Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23cf3c1b-c9c4-4968-a7d0-1a008df1f28f name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.134071879Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23cf3c1b-c9c4-4968-a7d0-1a008df1f28f name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.137675297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3cc7437a-e247-456a-89e4-8faf974f8a2d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.142614438Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748256142525456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cc7437a-e247-456a-89e4-8faf974f8a2d name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.143899217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=126c8b27-7856-429c-af08-bd3c15c09ce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.143949069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=126c8b27-7856-429c-af08-bd3c15c09ce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.144143068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd,PodSandboxId:2bee519619535082644fe996c8b8fbb83d70e601fe2096259878aca2111f98db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747293941754231,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6979830-492e-4ef7-960f-2d4756de1c8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923,PodSandboxId:661bee4cd9442dbb4799db303272bf39168ead51cc18b84e439cf9c131bd132c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293418149691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rc947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d041322-9d6b-4f46-8f58-e2991f34a297,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23,PodSandboxId:14b164f9ba75378fb5eb2dde1f5dd63af841fd35173bc236e45de1fe1818f34d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293393435426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mf6q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
5f7f959-715b-48a1-9f85-f267614182f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60,PodSandboxId:21244b9c171a08ce8ce0df6e42b966866e2be778e8645f28b431000908dbc672,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723747292717441345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ktczt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e5b692-edd5-48fd-879b-7b8da4dea9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202,PodSandboxId:288e460fbf36cb3325b66c511b3800e477442ef7918fd5be592c9cca7575d44f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747281489372370,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7be271d3c560008ab55525ae8d1647,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b,PodSandboxId:045b1bc78063e67745939f5c01b8bf7e68904f5571e29cecb54a33fcab375408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747281483006224,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29,PodSandboxId:72b366a683f32ceb84eaa9817abcde93225b8cc46e31b3ed361cb0836b047fa1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747281516659176,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61757fe39b4aeb4552b1709a7caa21c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd,PodSandboxId:81d4b953fb109076577f2ef42ccd5bb0d1ae555d6b29cfa93d9d8b9c4eb27a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747281419870562,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e277384463d451b36e4fbd6f3eedcba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127,PodSandboxId:28629532ce90ad5195501dc9d9c6c016481208aa96b69eb7d120166ac83f6f3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746994145532058,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=126c8b27-7856-429c-af08-bd3c15c09ce3 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.187303781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02a70d1a-7cb5-4861-926d-7f2a2e6e6105 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.187400079Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02a70d1a-7cb5-4861-926d-7f2a2e6e6105 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.188972460Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=57194d5e-3ef5-4cef-8a4b-7aad00ba3fa6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.189851538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748256189823424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=57194d5e-3ef5-4cef-8a4b-7aad00ba3fa6 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.190443058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd80842a-d4db-46fa-a51d-f37c2d04b590 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.190574455Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd80842a-d4db-46fa-a51d-f37c2d04b590 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.190836571Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd,PodSandboxId:2bee519619535082644fe996c8b8fbb83d70e601fe2096259878aca2111f98db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747293941754231,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6979830-492e-4ef7-960f-2d4756de1c8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923,PodSandboxId:661bee4cd9442dbb4799db303272bf39168ead51cc18b84e439cf9c131bd132c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293418149691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rc947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d041322-9d6b-4f46-8f58-e2991f34a297,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23,PodSandboxId:14b164f9ba75378fb5eb2dde1f5dd63af841fd35173bc236e45de1fe1818f34d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293393435426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mf6q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
5f7f959-715b-48a1-9f85-f267614182f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60,PodSandboxId:21244b9c171a08ce8ce0df6e42b966866e2be778e8645f28b431000908dbc672,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723747292717441345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ktczt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e5b692-edd5-48fd-879b-7b8da4dea9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202,PodSandboxId:288e460fbf36cb3325b66c511b3800e477442ef7918fd5be592c9cca7575d44f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747281489372370,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7be271d3c560008ab55525ae8d1647,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b,PodSandboxId:045b1bc78063e67745939f5c01b8bf7e68904f5571e29cecb54a33fcab375408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747281483006224,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29,PodSandboxId:72b366a683f32ceb84eaa9817abcde93225b8cc46e31b3ed361cb0836b047fa1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747281516659176,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61757fe39b4aeb4552b1709a7caa21c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd,PodSandboxId:81d4b953fb109076577f2ef42ccd5bb0d1ae555d6b29cfa93d9d8b9c4eb27a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747281419870562,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e277384463d451b36e4fbd6f3eedcba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127,PodSandboxId:28629532ce90ad5195501dc9d9c6c016481208aa96b69eb7d120166ac83f6f3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746994145532058,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd80842a-d4db-46fa-a51d-f37c2d04b590 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.230717205Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ca7543b-413f-4035-8afc-f95403f405ee name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.230852306Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ca7543b-413f-4035-8afc-f95403f405ee name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.232299612Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17b1eb2a-7886-4bce-95ea-a5ad44e01f9f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.233155261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748256233119651,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17b1eb2a-7886-4bce-95ea-a5ad44e01f9f name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.234259463Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d435bea0-da08-4099-8bb4-27ad0627d294 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.234350175Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d435bea0-da08-4099-8bb4-27ad0627d294 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:36 embed-certs-555028 crio[731]: time="2024-08-15 18:57:36.234864465Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd,PodSandboxId:2bee519619535082644fe996c8b8fbb83d70e601fe2096259878aca2111f98db,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747293941754231,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6979830-492e-4ef7-960f-2d4756de1c8f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923,PodSandboxId:661bee4cd9442dbb4799db303272bf39168ead51cc18b84e439cf9c131bd132c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293418149691,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-rc947,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d041322-9d6b-4f46-8f58-e2991f34a297,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23,PodSandboxId:14b164f9ba75378fb5eb2dde1f5dd63af841fd35173bc236e45de1fe1818f34d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747293393435426,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-mf6q4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
5f7f959-715b-48a1-9f85-f267614182f1,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60,PodSandboxId:21244b9c171a08ce8ce0df6e42b966866e2be778e8645f28b431000908dbc672,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt
:1723747292717441345,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ktczt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5e5b692-edd5-48fd-879b-7b8da4dea9fd,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202,PodSandboxId:288e460fbf36cb3325b66c511b3800e477442ef7918fd5be592c9cca7575d44f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747281489372370,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc7be271d3c560008ab55525ae8d1647,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b,PodSandboxId:045b1bc78063e67745939f5c01b8bf7e68904f5571e29cecb54a33fcab375408,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747281483006224,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29,PodSandboxId:72b366a683f32ceb84eaa9817abcde93225b8cc46e31b3ed361cb0836b047fa1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747281516659176,Labels:map[string]string{io.kubernetes.container.name: kube-sch
eduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 61757fe39b4aeb4552b1709a7caa21c5,},Annotations:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd,PodSandboxId:81d4b953fb109076577f2ef42ccd5bb0d1ae555d6b29cfa93d9d8b9c4eb27a43,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747281419870562,Labels:map[string]string{io.kubernetes.container.name: kube-contro
ller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e277384463d451b36e4fbd6f3eedcba,},Annotations:map[string]string{io.kubernetes.container.hash: 3994b1a4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127,PodSandboxId:28629532ce90ad5195501dc9d9c6c016481208aa96b69eb7d120166ac83f6f3c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_EXITED,CreatedAt:1723746994145532058,Labels:map[string]string{io.kubernetes.container.name: kube-a
piserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-555028,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9f8b5d8d4498eb14b4cc32d787c1b32,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d435bea0-da08-4099-8bb4-27ad0627d294 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bdbe7e7cc12a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   2bee519619535       storage-provisioner
	35cbd61f4bfab       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   661bee4cd9442       coredns-6f6b679f8f-rc947
	bfbb9e69688fe       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   16 minutes ago      Running             coredns                   0                   14b164f9ba753       coredns-6f6b679f8f-mf6q4
	05f410d5291c1       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494   16 minutes ago      Running             kube-proxy                0                   21244b9c171a0       kube-proxy-ktczt
	ef05ad509ee70       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94   16 minutes ago      Running             kube-scheduler            2                   72b366a683f32       kube-scheduler-embed-certs-555028
	c021e3026550c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   288e460fbf36c       etcd-embed-certs-555028
	e3c9992921abe       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   16 minutes ago      Running             kube-apiserver            2                   045b1bc78063e       kube-apiserver-embed-certs-555028
	8e65efc886174       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1   16 minutes ago      Running             kube-controller-manager   2                   81d4b953fb109       kube-controller-manager-embed-certs-555028
	89d454829c809       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3   21 minutes ago      Exited              kube-apiserver            1                   28629532ce90a       kube-apiserver-embed-certs-555028
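
The snapshot above shows every control-plane component running on its third attempt (ATTEMPT 2), plus one exited kube-apiserver container from attempt 1, while the workload pods (coredns, kube-proxy, storage-provisioner) are still on attempt 0. An equivalent snapshot can be pulled straight from the node; this is only a sketch, and the profile name is an assumption based on the hostname in these logs:

    minikube ssh -p embed-certs-555028 -- sudo crictl ps -a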
	
	
	==> coredns [35cbd61f4bfab395140a4bcfbd6c044a651fe8a6568295ec7ee7f4b5e4ca1923] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [bfbb9e69688febac549ece79607c35cd8312fd7b3aa6aacce1b6cd62087dee23] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               embed-certs-555028
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-555028
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=embed-certs-555028
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T18_41_27_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:41:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-555028
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:57:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:56:56 +0000   Thu, 15 Aug 2024 18:41:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:56:56 +0000   Thu, 15 Aug 2024 18:41:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:56:56 +0000   Thu, 15 Aug 2024 18:41:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:56:56 +0000   Thu, 15 Aug 2024 18:41:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.234
	  Hostname:    embed-certs-555028
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58337ac85e14457bba146b4596c6a76a
	  System UUID:                58337ac8-5e14-457b-ba14-6b4596c6a76a
	  Boot ID:                    2d528187-5591-4970-93a9-8a059bc290b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-mf6q4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-6f6b679f8f-rc947                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-555028                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-555028             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-555028    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-ktczt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-555028             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-zkpx5               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-555028 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-555028 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-555028 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-555028 event: Registered Node embed-certs-555028 in Controller
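
The node description above reports embed-certs-555028 as Ready with no taints and nine non-terminated pods, including metrics-server-6867b74b74-zkpx5. The same view can be regenerated against this cluster; the kubeconfig context name below is an assumption based on the minikube profile:

    kubectl --context embed-certs-555028 describe node embed-certs-555028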
	
	
	==> dmesg <==
	[  +0.049950] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039053] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.784825] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.514965] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.570133] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.825722] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.059727] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.088931] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.182480] systemd-fstab-generator[674]: Ignoring "noauto" option for root device
	[  +0.146928] systemd-fstab-generator[686]: Ignoring "noauto" option for root device
	[  +0.315189] systemd-fstab-generator[715]: Ignoring "noauto" option for root device
	[  +4.227918] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +0.060750] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.650920] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +4.583842] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.854177] kauditd_printk_skb: 85 callbacks suppressed
	[Aug15 18:41] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.473520] systemd-fstab-generator[2588]: Ignoring "noauto" option for root device
	[  +4.958275] kauditd_printk_skb: 54 callbacks suppressed
	[  +2.130220] systemd-fstab-generator[2913]: Ignoring "noauto" option for root device
	[  +4.879520] systemd-fstab-generator[3034]: Ignoring "noauto" option for root device
	[  +0.117904] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.121404] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [c021e3026550c85b8c2604df475739138eabcfe297c2068b1e3dbccb20363202] <==
	{"level":"info","ts":"2024-08-15T18:41:22.131718Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:41:22.133537Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T18:41:22.133596Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T18:41:22.135751Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:41:22.147983Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T18:41:22.148789Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.234:2379"}
	{"level":"info","ts":"2024-08-15T18:41:22.149833Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T18:51:22.693310Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-08-15T18:51:22.701521Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":683,"took":"7.546107ms","hash":93839246,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2220032,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-08-15T18:51:22.701611Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":93839246,"revision":683,"compact-revision":-1}
	{"level":"info","ts":"2024-08-15T18:56:22.699726Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":926}
	{"level":"info","ts":"2024-08-15T18:56:22.703958Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":926,"took":"3.786188ms","hash":962924074,"current-db-size-bytes":2220032,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-08-15T18:56:22.704014Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":962924074,"revision":926,"compact-revision":683}
	{"level":"warn","ts":"2024-08-15T18:56:30.872018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.098117ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:56:30.872404Z","caller":"traceutil/trace.go:171","msg":"trace[452388962] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:1177; }","duration":"144.550736ms","start":"2024-08-15T18:56:30.727824Z","end":"2024-08-15T18:56:30.872375Z","steps":["trace[452388962] 'count revisions from in-memory index tree'  (duration: 143.944707ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:57:21.094826Z","caller":"traceutil/trace.go:171","msg":"trace[482730793] linearizableReadLoop","detail":"{readStateIndex:1424; appliedIndex:1423; }","duration":"103.41198ms","start":"2024-08-15T18:57:20.991401Z","end":"2024-08-15T18:57:21.094813Z","steps":["trace[482730793] 'read index received'  (duration: 103.208553ms)","trace[482730793] 'applied index is now lower than readState.Index'  (duration: 202.627µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:57:21.095078Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.635377ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:21.095110Z","caller":"traceutil/trace.go:171","msg":"trace[712136588] transaction","detail":"{read_only:false; response_revision:1219; number_of_response:1; }","duration":"134.784602ms","start":"2024-08-15T18:57:20.960303Z","end":"2024-08-15T18:57:21.095087Z","steps":["trace[712136588] 'process raft request'  (duration: 134.392223ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:57:21.095122Z","caller":"traceutil/trace.go:171","msg":"trace[261690488] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1219; }","duration":"103.738811ms","start":"2024-08-15T18:57:20.991377Z","end":"2024-08-15T18:57:21.095116Z","steps":["trace[261690488] 'agreement among raft nodes before linearized reading'  (duration: 103.618582ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.147155Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.407389ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7911858449710586417 > lease_revoke:<id:6dcc9157582c19d5>","response":"size:29"}
	{"level":"info","ts":"2024-08-15T18:57:22.147314Z","caller":"traceutil/trace.go:171","msg":"trace[846080392] linearizableReadLoop","detail":"{readStateIndex:1425; appliedIndex:1424; }","duration":"528.27092ms","start":"2024-08-15T18:57:21.619030Z","end":"2024-08-15T18:57:22.147301Z","steps":["trace[846080392] 'read index received'  (duration: 205.38645ms)","trace[846080392] 'applied index is now lower than readState.Index'  (duration: 322.883185ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:57:22.147416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"528.373476ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.147577Z","caller":"traceutil/trace.go:171","msg":"trace[1646342017] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1219; }","duration":"528.54113ms","start":"2024-08-15T18:57:21.619026Z","end":"2024-08-15T18:57:22.147567Z","steps":["trace[1646342017] 'agreement among raft nodes before linearized reading'  (duration: 528.35636ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.148271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.023898ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.148329Z","caller":"traceutil/trace.go:171","msg":"trace[1383209282] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1219; }","duration":"156.099025ms","start":"2024-08-15T18:57:21.992220Z","end":"2024-08-15T18:57:22.148319Z","steps":["trace[1383209282] 'agreement among raft nodes before linearized reading'  (duration: 156.000513ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:57:36 up 21 min,  0 users,  load average: 0.14, 0.15, 0.14
	Linux embed-certs-555028 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [89d454829c8090564419cb10cf3985e4237627f74a594acfdec2f1f412d28127] <==
	W0815 18:41:14.281761       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.375361       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.405403       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.422560       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.444651       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.475637       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.551062       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.552414       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.559296       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.565995       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.577708       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.692164       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.742545       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.760277       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.782856       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.797698       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.852610       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.938685       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.962183       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:14.981079       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.156824       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.164648       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.164899       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.211726       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0815 18:41:15.272041       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e3c9992921abedae8effdb9b902c483fbbfe9ba2137c8f11ad61f713bbe2af7b] <==
	I0815 18:54:25.382899       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:54:25.382980       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:56:24.384194       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:56:24.384374       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 18:56:25.385729       1 handler_proxy.go:99] no RequestInfo found in the context
	W0815 18:56:25.385970       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:56:25.386049       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0815 18:56:25.385978       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:56:25.387276       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:56:25.387411       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:57:25.388553       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:57:25.388702       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0815 18:57:25.388834       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:57:25.388892       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0815 18:57:25.390030       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:57:25.390139       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
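
The repeated 503 responses while aggregating v1beta1.metrics.k8s.io indicate that the metrics-server APIService backend never became reachable from the apiserver, which is consistent with the metrics-server-related failures reported for this run. A quick way to confirm the APIService state is sketched below; the context name is an assumption based on the minikube profile:

    kubectl --context embed-certs-555028 get apiservice v1beta1.metrics.k8s.io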
	
	
	==> kube-controller-manager [8e65efc88617436f3abba01473a06b2e072d597d3178601795075a0ab9dff0fd] <==
	E0815 18:52:31.436990       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:52:31.910807       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:52:42.232744       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="149.121µs"
	E0815 18:53:01.444042       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:53:01.919987       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:53:31.450139       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:53:31.938137       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:54:01.456908       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:54:01.946229       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:54:31.463208       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:54:31.954893       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:55:01.469797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:55:01.963411       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:55:31.475819       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:55:31.972574       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:56:01.482578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:56:01.980105       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:56:31.490638       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:56:31.990677       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:56:56.262835       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-555028"
	E0815 18:57:01.497642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:57:01.999363       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:57:31.237978       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="906.267µs"
	E0815 18:57:31.503542       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:57:32.008647       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [05f410d5291c15c563a6bcb1f17784bebfbcc573d03cf66653cc4009dcce3d60] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:41:33.211781       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:41:33.255172       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.234"]
	E0815 18:41:33.255416       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:41:33.478156       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:41:33.478238       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:41:33.478268       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:41:33.504829       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:41:33.505127       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:41:33.505159       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:41:33.566402       1 config.go:197] "Starting service config controller"
	I0815 18:41:33.566448       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:41:33.566524       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:41:33.566546       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:41:33.579207       1 config.go:326] "Starting node config controller"
	I0815 18:41:33.579297       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:41:33.695814       1 shared_informer.go:320] Caches are synced for node config
	I0815 18:41:33.696226       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:41:33.696254       1 shared_informer.go:320] Caches are synced for endpoint slice config
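
The nftables cleanup errors at startup ("Operation not supported") appear non-fatal here: kube-proxy reported no IPv6 iptables support, continued in single-stack IPv4 mode with the iptables proxier, and all three config caches synced. These lines can be re-collected from the pod named in the container listing above; the context name is an assumption based on the minikube profile:

    kubectl --context embed-certs-555028 -n kube-system logs kube-proxy-ktczt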
	
	
	==> kube-scheduler [ef05ad509ee70652c70b4613499d46bbce0b9aa17ab7204b38372de527733a29] <==
	W0815 18:41:24.409881       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 18:41:24.410031       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0815 18:41:25.208274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 18:41:25.208416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.219381       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 18:41:25.219594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.228358       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 18:41:25.228452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.287166       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 18:41:25.287396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.293995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 18:41:25.294073       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.390122       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 18:41:25.390263       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.392396       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 18:41:25.392516       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.491629       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 18:41:25.492943       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.557608       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 18:41:25.558013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.682556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 18:41:25.682595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 18:41:25.793587       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 18:41:25.793822       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 18:41:28.390823       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:56:37 embed-certs-555028 kubelet[2920]: E0815 18:56:37.481128    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748197480170364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:37 embed-certs-555028 kubelet[2920]: E0815 18:56:37.481259    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748197480170364,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:42 embed-certs-555028 kubelet[2920]: E0815 18:56:42.215340    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:56:47 embed-certs-555028 kubelet[2920]: E0815 18:56:47.483747    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748207483372524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:47 embed-certs-555028 kubelet[2920]: E0815 18:56:47.483791    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748207483372524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:54 embed-certs-555028 kubelet[2920]: E0815 18:56:54.214844    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:56:57 embed-certs-555028 kubelet[2920]: E0815 18:56:57.488343    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748217488099457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:57 embed-certs-555028 kubelet[2920]: E0815 18:56:57.488387    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748217488099457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:07 embed-certs-555028 kubelet[2920]: E0815 18:57:07.490565    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748227490228752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:07 embed-certs-555028 kubelet[2920]: E0815 18:57:07.490830    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748227490228752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:08 embed-certs-555028 kubelet[2920]: E0815 18:57:08.214408    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:57:17 embed-certs-555028 kubelet[2920]: E0815 18:57:17.492429    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748237491998313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:17 embed-certs-555028 kubelet[2920]: E0815 18:57:17.492460    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748237491998313,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:19 embed-certs-555028 kubelet[2920]: E0815 18:57:19.231797    2920 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 15 18:57:19 embed-certs-555028 kubelet[2920]: E0815 18:57:19.231883    2920 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Aug 15 18:57:19 embed-certs-555028 kubelet[2920]: E0815 18:57:19.232159    2920 kuberuntime_manager.go:1272] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5g5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-zkpx5_kube-system(92e18af9-7bd1-4891-b551-06ba4b293560): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Aug 15 18:57:19 embed-certs-555028 kubelet[2920]: E0815 18:57:19.233609    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	Aug 15 18:57:27 embed-certs-555028 kubelet[2920]: E0815 18:57:27.233330    2920 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:57:27 embed-certs-555028 kubelet[2920]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:57:27 embed-certs-555028 kubelet[2920]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:57:27 embed-certs-555028 kubelet[2920]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:57:27 embed-certs-555028 kubelet[2920]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:57:27 embed-certs-555028 kubelet[2920]: E0815 18:57:27.496565    2920 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748247495996270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:27 embed-certs-555028 kubelet[2920]: E0815 18:57:27.496690    2920 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748247495996270,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134124,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:31 embed-certs-555028 kubelet[2920]: E0815 18:57:31.215582    2920 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zkpx5" podUID="92e18af9-7bd1-4891-b551-06ba4b293560"
	
	
	==> storage-provisioner [bdbe7e7cc12a24a7735aca0a3420aa993a88b5226b3fe7139154d5de11e8a2cd] <==
	I0815 18:41:34.051569       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 18:41:34.062388       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 18:41:34.063170       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 18:41:34.071935       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 18:41:34.072084       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-555028_b6ffbb90-3015-45d1-8de4-797eb7674e8e!
	I0815 18:41:34.073226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a1875244-4cca-4562-be1c-7ec3504412a3", APIVersion:"v1", ResourceVersion:"397", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-555028_b6ffbb90-3015-45d1-8de4-797eb7674e8e became leader
	I0815 18:41:34.172779       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-555028_b6ffbb90-3015-45d1-8de4-797eb7674e8e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-555028 -n embed-certs-555028
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-555028 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-zkpx5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-555028 describe pod metrics-server-6867b74b74-zkpx5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-555028 describe pod metrics-server-6867b74b74-zkpx5: exit status 1 (68.77319ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-zkpx5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-555028 describe pod metrics-server-6867b74b74-zkpx5: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (415.35s)
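The ImagePullBackOff and "no such host" errors that dominate the kubelet log above are expected for this suite: when the metrics-server addon is enabled, the test deliberately points it at an unreachable registry, as recorded in the Audit table further down. A rough reconstruction of that setup for this profile (taken from the audit entries, not re-run here) is:

	out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-555028 \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 \
		--registries=MetricsServer=fake.domain

Because fake.domain never resolves, the metrics-server pod can never pull its image and stays non-running for the life of the profile, which is why it is the pod listed under "non-running pods" in the post-mortem above.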

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (378.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-599042 -n no-preload-599042
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-08-15 18:57:31.798104223 +0000 UTC m=+6742.776209389
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-599042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-599042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.619µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-599042 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
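The assertion that fails here only checks that the dashboard-metrics-scraper deployment was rewritten to carry the custom echoserver image; because the describe call above hit the context deadline, the deployment info came back empty and the check could not pass. A manual spot check of the same condition (assuming the namespace and deployment names shown in the test output) would be roughly:

	kubectl --context no-preload-599042 -n kubernetes-dashboard \
		get deploy dashboard-metrics-scraper \
		-o jsonpath='{.spec.template.spec.containers[*].image}'
	# output is expected to contain registry.k8s.io/echoserver:1.4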
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-599042 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-599042 logs -n 25: (1.290552065s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:29 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-599042             | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-555028            | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-423062  | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-278865        | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-599042                  | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC | 15 Aug 24 18:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-555028                 | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-423062       | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-278865             | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:55 UTC | 15 Aug 24 18:55 UTC |
	| start   | -p newest-cni-828957 --memory=2200 --alsologtostderr   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:55 UTC | 15 Aug 24 18:56 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-828957             | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:56 UTC | 15 Aug 24 18:56 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-828957                                   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:56 UTC | 15 Aug 24 18:56 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-828957                  | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:56 UTC | 15 Aug 24 18:56 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-828957 --memory=2200 --alsologtostderr   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:56 UTC | 15 Aug 24 18:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| image   | newest-cni-828957 image list                           | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC | 15 Aug 24 18:57 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-828957                                   | newest-cni-828957            | jenkins | v1.33.1 | 15 Aug 24 18:57 UTC |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:56:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:56:54.556693   75302 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:56:54.556821   75302 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:56:54.556831   75302 out.go:358] Setting ErrFile to fd 2...
	I0815 18:56:54.556837   75302 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:56:54.557001   75302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:56:54.557550   75302 out.go:352] Setting JSON to false
	I0815 18:56:54.558445   75302 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":9561,"bootTime":1723738654,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:56:54.558503   75302 start.go:139] virtualization: kvm guest
	I0815 18:56:54.560591   75302 out.go:177] * [newest-cni-828957] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:56:54.561871   75302 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:56:54.561900   75302 notify.go:220] Checking for updates...
	I0815 18:56:54.564383   75302 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:56:54.565610   75302 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:56:54.566729   75302 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:56:54.568014   75302 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:56:54.569571   75302 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:56:54.571246   75302 config.go:182] Loaded profile config "newest-cni-828957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:56:54.571652   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:56:54.571687   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:56:54.587068   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44109
	I0815 18:56:54.587430   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:56:54.587891   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:56:54.587933   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:56:54.588252   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:56:54.588444   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:56:54.588759   75302 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:56:54.589081   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:56:54.589118   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:56:54.604322   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I0815 18:56:54.604752   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:56:54.605217   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:56:54.605240   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:56:54.605562   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:56:54.605697   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:56:54.641898   75302 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:56:54.643228   75302 start.go:297] selected driver: kvm2
	I0815 18:56:54.643245   75302 start.go:901] validating driver "kvm2" against &{Name:newest-cni-828957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:newest-cni-828957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Star
tHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:56:54.643361   75302 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:56:54.644066   75302 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:56:54.644156   75302 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:56:54.658480   75302 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:56:54.658832   75302 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0815 18:56:54.658892   75302 cni.go:84] Creating CNI manager for ""
	I0815 18:56:54.658909   75302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:56:54.658943   75302 start.go:340] cluster config:
	{Name:newest-cni-828957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-828957 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:56:54.659033   75302 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:56:54.661410   75302 out.go:177] * Starting "newest-cni-828957" primary control-plane node in "newest-cni-828957" cluster
	I0815 18:56:54.662779   75302 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:56:54.662827   75302 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:56:54.662836   75302 cache.go:56] Caching tarball of preloaded images
	I0815 18:56:54.662919   75302 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:56:54.662932   75302 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on crio
	I0815 18:56:54.663032   75302 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/config.json ...
	I0815 18:56:54.663209   75302 start.go:360] acquireMachinesLock for newest-cni-828957: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:56:54.663247   75302 start.go:364] duration metric: took 22.251µs to acquireMachinesLock for "newest-cni-828957"
	I0815 18:56:54.663265   75302 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:56:54.663286   75302 fix.go:54] fixHost starting: 
	I0815 18:56:54.663553   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:56:54.663587   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:56:54.678652   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43499
	I0815 18:56:54.679080   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:56:54.679578   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:56:54.679598   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:56:54.679907   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:56:54.680154   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:56:54.680319   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetState
	I0815 18:56:54.682007   75302 fix.go:112] recreateIfNeeded on newest-cni-828957: state=Stopped err=<nil>
	I0815 18:56:54.682035   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	W0815 18:56:54.682218   75302 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:56:54.684073   75302 out.go:177] * Restarting existing kvm2 VM for "newest-cni-828957" ...
	I0815 18:56:54.685540   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Start
	I0815 18:56:54.685716   75302 main.go:141] libmachine: (newest-cni-828957) Ensuring networks are active...
	I0815 18:56:54.686547   75302 main.go:141] libmachine: (newest-cni-828957) Ensuring network default is active
	I0815 18:56:54.686934   75302 main.go:141] libmachine: (newest-cni-828957) Ensuring network mk-newest-cni-828957 is active
	I0815 18:56:54.687332   75302 main.go:141] libmachine: (newest-cni-828957) Getting domain xml...
	I0815 18:56:54.688166   75302 main.go:141] libmachine: (newest-cni-828957) Creating domain...
	I0815 18:56:55.953792   75302 main.go:141] libmachine: (newest-cni-828957) Waiting to get IP...
	I0815 18:56:55.954732   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:56:55.955168   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:56:55.955231   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:56:55.955164   75337 retry.go:31] will retry after 197.257419ms: waiting for machine to come up
	I0815 18:56:56.153479   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:56:56.153969   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:56:56.153999   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:56:56.153934   75337 retry.go:31] will retry after 324.200416ms: waiting for machine to come up
	I0815 18:56:56.479399   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:56:56.479841   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:56:56.479866   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:56:56.479802   75337 retry.go:31] will retry after 356.653975ms: waiting for machine to come up
	I0815 18:56:56.838448   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:56:56.838955   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:56:56.838976   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:56:56.838914   75337 retry.go:31] will retry after 450.786548ms: waiting for machine to come up
	I0815 18:56:57.291634   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:56:57.292036   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:56:57.292070   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:56:57.291987   75337 retry.go:31] will retry after 644.690292ms: waiting for machine to come up
	I0815 18:56:57.938584   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:56:57.939172   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:56:57.939202   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:56:57.939058   75337 retry.go:31] will retry after 744.836838ms: waiting for machine to come up
	I0815 18:56:58.685208   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:56:58.685737   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:56:58.685760   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:56:58.685668   75337 retry.go:31] will retry after 1.055463195s: waiting for machine to come up
	I0815 18:56:59.742167   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:56:59.742808   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:56:59.742854   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:56:59.742755   75337 retry.go:31] will retry after 969.642167ms: waiting for machine to come up
	I0815 18:57:00.714037   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:00.714530   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:57:00.714558   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:57:00.714483   75337 retry.go:31] will retry after 1.195929127s: waiting for machine to come up
	I0815 18:57:01.911830   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:01.912332   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:57:01.912358   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:57:01.912300   75337 retry.go:31] will retry after 1.456136256s: waiting for machine to come up
	I0815 18:57:03.369662   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:03.370268   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:57:03.370297   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:57:03.370202   75337 retry.go:31] will retry after 2.639413692s: waiting for machine to come up
	I0815 18:57:06.011843   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:06.012301   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:57:06.012346   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:57:06.012234   75337 retry.go:31] will retry after 3.376353422s: waiting for machine to come up
	I0815 18:57:09.390971   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:09.391604   75302 main.go:141] libmachine: (newest-cni-828957) DBG | unable to find current IP address of domain newest-cni-828957 in network mk-newest-cni-828957
	I0815 18:57:09.391636   75302 main.go:141] libmachine: (newest-cni-828957) DBG | I0815 18:57:09.391559   75337 retry.go:31] will retry after 3.779729263s: waiting for machine to come up
	I0815 18:57:13.175962   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.176510   75302 main.go:141] libmachine: (newest-cni-828957) Found IP for machine: 192.168.39.8
	I0815 18:57:13.176535   75302 main.go:141] libmachine: (newest-cni-828957) Reserving static IP address...
	I0815 18:57:13.176550   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has current primary IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.177052   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "newest-cni-828957", mac: "52:54:00:6c:09:a9", ip: "192.168.39.8"} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.177102   75302 main.go:141] libmachine: (newest-cni-828957) DBG | skip adding static IP to network mk-newest-cni-828957 - found existing host DHCP lease matching {name: "newest-cni-828957", mac: "52:54:00:6c:09:a9", ip: "192.168.39.8"}
	I0815 18:57:13.177115   75302 main.go:141] libmachine: (newest-cni-828957) Reserved static IP address: 192.168.39.8
	I0815 18:57:13.177129   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Getting to WaitForSSH function...
	I0815 18:57:13.177143   75302 main.go:141] libmachine: (newest-cni-828957) Waiting for SSH to be available...
	I0815 18:57:13.179384   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.179785   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.179815   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.179883   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Using SSH client type: external
	I0815 18:57:13.179936   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa (-rw-------)
	I0815 18:57:13.179962   75302 main.go:141] libmachine: (newest-cni-828957) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:57:13.179975   75302 main.go:141] libmachine: (newest-cni-828957) DBG | About to run SSH command:
	I0815 18:57:13.179984   75302 main.go:141] libmachine: (newest-cni-828957) DBG | exit 0
	I0815 18:57:13.308840   75302 main.go:141] libmachine: (newest-cni-828957) DBG | SSH cmd err, output: <nil>: 
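The growing delays in the lines above (356ms, 450ms, 644ms, ... up to 3.7s) come from a retry helper that keeps polling for the domain's DHCP lease until the machine comes up. A minimal illustrative sketch of that pattern in Go follows; the names waitForIP and lookup are hypothetical stand-ins, not minikube's actual API:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup() with jittered, growing delays, mirroring the
    // "will retry after ...: waiting for machine to come up" lines above.
    // lookup stands in for querying the hypervisor's DHCP leases for the domain.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            if delay < 4*time.Second {
                delay *= 2 // back off, roughly doubling as in the log above
            }
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        start := time.Now()
        ip, err := waitForIP(func() (string, error) {
            if time.Since(start) < 3*time.Second {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.8", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }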
	I0815 18:57:13.309198   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetConfigRaw
	I0815 18:57:13.309848   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetIP
	I0815 18:57:13.312728   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.313090   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.313126   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.313350   75302 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/config.json ...
	I0815 18:57:13.313576   75302 machine.go:93] provisionDockerMachine start ...
	I0815 18:57:13.313595   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:13.313818   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:13.315903   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.316267   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.316294   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.316371   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:13.316549   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:13.316698   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:13.316833   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:13.316997   75302 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:13.317173   75302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0815 18:57:13.317183   75302 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:57:13.433178   75302 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:57:13.433205   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetMachineName
	I0815 18:57:13.433444   75302 buildroot.go:166] provisioning hostname "newest-cni-828957"
	I0815 18:57:13.433474   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetMachineName
	I0815 18:57:13.433675   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:13.436391   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.436858   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.436889   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.437060   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:13.437256   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:13.437477   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:13.437647   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:13.437831   75302 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:13.438007   75302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0815 18:57:13.438020   75302 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-828957 && echo "newest-cni-828957" | sudo tee /etc/hostname
	I0815 18:57:13.567854   75302 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-828957
	
	I0815 18:57:13.567897   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:13.570598   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.570898   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.570935   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.571118   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:13.571320   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:13.571498   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:13.571690   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:13.571902   75302 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:13.572092   75302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0815 18:57:13.572116   75302 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-828957' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-828957/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-828957' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:57:13.694079   75302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:57:13.694111   75302 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:57:13.694159   75302 buildroot.go:174] setting up certificates
	I0815 18:57:13.694167   75302 provision.go:84] configureAuth start
	I0815 18:57:13.694181   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetMachineName
	I0815 18:57:13.694491   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetIP
	I0815 18:57:13.697359   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.697742   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.697767   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.697948   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:13.700125   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.700503   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.700544   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.700634   75302 provision.go:143] copyHostCerts
	I0815 18:57:13.700709   75302 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:57:13.700725   75302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:57:13.700803   75302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:57:13.700910   75302 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:57:13.700920   75302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:57:13.700960   75302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:57:13.701030   75302 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:57:13.701040   75302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:57:13.701091   75302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:57:13.701161   75302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.newest-cni-828957 san=[127.0.0.1 192.168.39.8 localhost minikube newest-cni-828957]
	I0815 18:57:13.843333   75302 provision.go:177] copyRemoteCerts
	I0815 18:57:13.843393   75302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:57:13.843418   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:13.846173   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.846542   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:13.846573   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:13.846832   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:13.847022   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:13.847204   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:13.847352   75302 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa Username:docker}
	I0815 18:57:13.934960   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:57:13.959663   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:57:13.984596   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:57:14.009476   75302 provision.go:87] duration metric: took 315.294495ms to configureAuth
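The "generating server cert" step above signs a host certificate whose SANs cover the VM's IP and hostnames (127.0.0.1, 192.168.39.8, localhost, minikube, newest-cni-828957). A minimal sketch of issuing such a CA-signed server certificate with Go's crypto/x509 follows; it creates a throwaway CA in memory instead of loading the real ca.pem/ca-key.pem and is purely illustrative, not minikube's implementation (error checks elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; the real flow would load the existing ca.pem / ca-key.pem instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs reported in the log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-828957"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-828957"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.8")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }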
	I0815 18:57:14.009504   75302 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:57:14.009682   75302 config.go:182] Loaded profile config "newest-cni-828957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:14.009745   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:14.012612   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.013012   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:14.013043   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.013185   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:14.013362   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:14.013521   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:14.013666   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:14.013828   75302 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:14.014011   75302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0815 18:57:14.014032   75302 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:57:14.306295   75302 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:57:14.306328   75302 machine.go:96] duration metric: took 992.738693ms to provisionDockerMachine
	I0815 18:57:14.306341   75302 start.go:293] postStartSetup for "newest-cni-828957" (driver="kvm2")
	I0815 18:57:14.306357   75302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:57:14.306374   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:14.306693   75302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:57:14.306721   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:14.309320   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.309748   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:14.309788   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.309940   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:14.310160   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:14.310382   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:14.310552   75302 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa Username:docker}
	I0815 18:57:14.400176   75302 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:57:14.404683   75302 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:57:14.404705   75302 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:57:14.404784   75302 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:57:14.404880   75302 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:57:14.404993   75302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:57:14.414493   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:57:14.438594   75302 start.go:296] duration metric: took 132.238198ms for postStartSetup
	I0815 18:57:14.438644   75302 fix.go:56] duration metric: took 19.775369755s for fixHost
	I0815 18:57:14.438669   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:14.441535   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.441932   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:14.441982   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.442088   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:14.442274   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:14.442450   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:14.442600   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:14.442759   75302 main.go:141] libmachine: Using SSH client type: native
	I0815 18:57:14.442914   75302 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.8 22 <nil> <nil>}
	I0815 18:57:14.442925   75302 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:57:14.553247   75302 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723748234.527859514
	
	I0815 18:57:14.553265   75302 fix.go:216] guest clock: 1723748234.527859514
	I0815 18:57:14.553275   75302 fix.go:229] Guest: 2024-08-15 18:57:14.527859514 +0000 UTC Remote: 2024-08-15 18:57:14.438649151 +0000 UTC m=+19.917108767 (delta=89.210363ms)
	I0815 18:57:14.553311   75302 fix.go:200] guest clock delta is within tolerance: 89.210363ms
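The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and skip a resync because the ~89ms delta is within tolerance. A minimal sketch of that delta computation in Go, with hypothetical helper names:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns the
    // absolute offset from the given host time (illustrative helper only).
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d, nil
    }

    func main() {
        // Values taken from the log above; prints roughly 89ms, within a 2s tolerance.
        d, _ := clockDelta("1723748234.527859514", time.Unix(0, 1723748234438649151))
        fmt.Println(d, d <= 2*time.Second)
    }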
	I0815 18:57:14.553318   75302 start.go:83] releasing machines lock for "newest-cni-828957", held for 19.890061466s
	I0815 18:57:14.553341   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:14.553589   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetIP
	I0815 18:57:14.556214   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.556508   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:14.556553   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.556660   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:14.557225   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:14.557399   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:14.557478   75302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:57:14.557525   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:14.557593   75302 ssh_runner.go:195] Run: cat /version.json
	I0815 18:57:14.557614   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:14.560378   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.560670   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.560698   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:14.560719   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.560883   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:14.561113   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:14.561118   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:14.561142   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:14.561309   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:14.561406   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:14.561456   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:14.561623   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:14.561627   75302 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa Username:docker}
	I0815 18:57:14.561757   75302 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa Username:docker}
	I0815 18:57:14.664911   75302 ssh_runner.go:195] Run: systemctl --version
	I0815 18:57:14.671074   75302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:57:14.815175   75302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:57:14.822285   75302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:57:14.822362   75302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:57:14.840114   75302 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:57:14.840136   75302 start.go:495] detecting cgroup driver to use...
	I0815 18:57:14.840188   75302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:57:14.856345   75302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:57:14.870738   75302 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:57:14.870799   75302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:57:14.884527   75302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:57:14.897779   75302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:57:15.015329   75302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:57:15.165633   75302 docker.go:233] disabling docker service ...
	I0815 18:57:15.165705   75302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:57:15.181808   75302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:57:15.196986   75302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:57:15.350533   75302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:57:15.463188   75302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:57:15.476673   75302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:57:15.495594   75302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:57:15.495662   75302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:15.505856   75302 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:57:15.505920   75302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:15.518491   75302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:15.529395   75302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:15.539789   75302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:57:15.550433   75302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:15.561873   75302 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:15.578853   75302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:57:15.589397   75302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:57:15.598483   75302 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:57:15.598540   75302 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:57:15.611435   75302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:57:15.626286   75302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:57:15.752111   75302 ssh_runner.go:195] Run: sudo systemctl restart crio
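Taken together, the crictl.yaml write and the sed edits above leave the container runtime configured roughly as follows. This fragment is reconstructed from the commands shown in the log, not read back from the VM:

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (keys touched by the edits above)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]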
	I0815 18:57:15.897407   75302 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:57:15.897477   75302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:57:15.902965   75302 start.go:563] Will wait 60s for crictl version
	I0815 18:57:15.903020   75302 ssh_runner.go:195] Run: which crictl
	I0815 18:57:15.907096   75302 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:57:15.949906   75302 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:57:15.949981   75302 ssh_runner.go:195] Run: crio --version
	I0815 18:57:15.979413   75302 ssh_runner.go:195] Run: crio --version
	I0815 18:57:16.012082   75302 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:57:16.013415   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetIP
	I0815 18:57:16.016199   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:16.016586   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:16.016614   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:16.016880   75302 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:57:16.021047   75302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:57:16.036086   75302 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0815 18:57:16.037622   75302 kubeadm.go:883] updating cluster {Name:newest-cni-828957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-828957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:57:16.037741   75302 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:57:16.037799   75302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:57:16.074454   75302 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:57:16.074519   75302 ssh_runner.go:195] Run: which lz4
	I0815 18:57:16.078584   75302 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:57:16.082939   75302 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:57:16.082974   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:57:17.475287   75302 crio.go:462] duration metric: took 1.39672829s to copy over tarball
	I0815 18:57:17.475366   75302 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:57:19.676479   75302 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.201085076s)
	I0815 18:57:19.676523   75302 crio.go:469] duration metric: took 2.201210231s to extract the tarball
	I0815 18:57:19.676531   75302 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:57:19.714510   75302 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:57:19.760343   75302 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:57:19.760367   75302 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:57:19.760375   75302 kubeadm.go:934] updating node { 192.168.39.8 8443 v1.31.0 crio true true} ...
	I0815 18:57:19.760517   75302 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-828957 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:newest-cni-828957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:57:19.760608   75302 ssh_runner.go:195] Run: crio config
	I0815 18:57:19.804144   75302 cni.go:84] Creating CNI manager for ""
	I0815 18:57:19.804162   75302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:57:19.804171   75302 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0815 18:57:19.804193   75302 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.8 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-828957 NodeName:newest-cni-828957 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:57:19.804338   75302 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-828957"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.8
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.8"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:57:19.804414   75302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:57:19.816198   75302 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:57:19.816292   75302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:57:19.826960   75302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (352 bytes)
	I0815 18:57:19.843752   75302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:57:19.860863   75302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2279 bytes)
	I0815 18:57:19.880006   75302 ssh_runner.go:195] Run: grep 192.168.39.8	control-plane.minikube.internal$ /etc/hosts
	I0815 18:57:19.884202   75302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:57:19.897201   75302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:57:20.033995   75302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:57:20.053038   75302 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957 for IP: 192.168.39.8
	I0815 18:57:20.053065   75302 certs.go:194] generating shared ca certs ...
	I0815 18:57:20.053081   75302 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:57:20.053246   75302 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:57:20.053303   75302 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:57:20.053317   75302 certs.go:256] generating profile certs ...
	I0815 18:57:20.053435   75302 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/client.key
	I0815 18:57:20.053504   75302 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/apiserver.key.2bc497e1
	I0815 18:57:20.053564   75302 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/proxy-client.key
	I0815 18:57:20.053698   75302 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:57:20.053726   75302 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:57:20.053735   75302 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:57:20.053757   75302 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:57:20.053781   75302 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:57:20.053802   75302 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:57:20.053838   75302 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:57:20.054472   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:57:20.087614   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:57:20.113316   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:57:20.138886   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:57:20.165119   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:57:20.197933   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:57:20.230230   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:57:20.256715   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/newest-cni-828957/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:57:20.281644   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:57:20.305786   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:57:20.333649   75302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:57:20.360516   75302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:57:20.379029   75302 ssh_runner.go:195] Run: openssl version
	I0815 18:57:20.385741   75302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:57:20.396816   75302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:57:20.401363   75302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:57:20.401411   75302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:57:20.407387   75302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:57:20.418044   75302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:57:20.429266   75302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:57:20.433885   75302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:57:20.433964   75302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:57:20.439681   75302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:57:20.450251   75302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:57:20.460579   75302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:57:20.465054   75302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:57:20.465117   75302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:57:20.470588   75302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:57:20.480892   75302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:57:20.485329   75302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:57:20.492042   75302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:57:20.498381   75302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:57:20.504956   75302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:57:20.511603   75302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:57:20.517712   75302 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
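Each `openssl x509 -checkend 86400` probe above verifies that the certificate remains valid for at least another 24 hours (86400 seconds). An equivalent check in Go would compare NotAfter against a future instant; a minimal sketch with a hypothetical helper name:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: it reports whether the
    // certificate's NotAfter falls inside the next d (illustrative helper only).
    func expiresWithin(certPEM []byte, d time.Duration) (bool, error) {
        block, _ := pem.Decode(certPEM)
        if block == nil {
            return false, fmt.Errorf("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
        if err != nil {
            fmt.Println(err)
            return
        }
        expiring, err := expiresWithin(data, 24*time.Hour)
        fmt.Println(expiring, err)
    }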
	I0815 18:57:20.523724   75302 kubeadm.go:392] StartCluster: {Name:newest-cni-828957 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-828957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:57:20.523810   75302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:57:20.523883   75302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:57:20.559824   75302 cri.go:89] found id: ""
	I0815 18:57:20.559882   75302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:57:20.570428   75302 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:57:20.570446   75302 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:57:20.570486   75302 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:57:20.580125   75302 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:57:20.581486   75302 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-828957" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:57:20.582460   75302 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-828957" cluster setting kubeconfig missing "newest-cni-828957" context setting]
	I0815 18:57:20.583751   75302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:57:20.585707   75302 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:57:20.595455   75302 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.8
	I0815 18:57:20.595484   75302 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:57:20.595496   75302 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:57:20.595537   75302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:57:20.634186   75302 cri.go:89] found id: ""
	I0815 18:57:20.634262   75302 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:57:20.650083   75302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:57:20.659943   75302 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:57:20.659965   75302 kubeadm.go:157] found existing configuration files:
	
	I0815 18:57:20.660025   75302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:57:20.668970   75302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:57:20.669029   75302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:57:20.678503   75302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:57:20.687465   75302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:57:20.687601   75302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:57:20.696976   75302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:57:20.706045   75302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:57:20.706108   75302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:57:20.715483   75302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:57:20.724310   75302 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:57:20.724372   75302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:57:20.733788   75302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:57:20.742814   75302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:57:20.868703   75302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:57:21.852864   75302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:57:22.085550   75302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:57:22.156982   75302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:57:22.237950   75302 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:57:22.238039   75302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:57:22.739123   75302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:57:23.238830   75302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:57:23.294504   75302 api_server.go:72] duration metric: took 1.056565509s to wait for apiserver process to appear ...
	I0815 18:57:23.294534   75302 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:57:23.294557   75302 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0815 18:57:23.295024   75302 api_server.go:269] stopped: https://192.168.39.8:8443/healthz: Get "https://192.168.39.8:8443/healthz": dial tcp 192.168.39.8:8443: connect: connection refused
	I0815 18:57:23.795656   75302 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0815 18:57:26.500012   75302 api_server.go:279] https://192.168.39.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:57:26.500046   75302 api_server.go:103] status: https://192.168.39.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:57:26.500062   75302 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0815 18:57:26.546308   75302 api_server.go:279] https://192.168.39.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:57:26.546346   75302 api_server.go:103] status: https://192.168.39.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:57:26.794597   75302 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0815 18:57:26.799212   75302 api_server.go:279] https://192.168.39.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:57:26.799241   75302 api_server.go:103] status: https://192.168.39.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:57:27.295518   75302 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0815 18:57:27.300450   75302 api_server.go:279] https://192.168.39.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:57:27.300485   75302 api_server.go:103] status: https://192.168.39.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:57:27.794819   75302 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0815 18:57:27.798834   75302 api_server.go:279] https://192.168.39.8:8443/healthz returned 200:
	ok
	I0815 18:57:27.805422   75302 api_server.go:141] control plane version: v1.31.0
	I0815 18:57:27.805459   75302 api_server.go:131] duration metric: took 4.510917792s to wait for apiserver health ...
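
The wait loop above polls https://192.168.39.8:8443/healthz until it returns 200. The early 403 ("system:anonymous" forbidden) and 500 responses are expected during startup: the 500 bodies show the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still pending. A rough, hypothetical sketch of such a polling loop is below; TLS verification is skipped only because this illustrative client does not load the minikubeCA certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it answers
// HTTP 200 or the deadline passes, mirroring the retry pattern in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver cert is signed by minikubeCA; skipping verification
		// keeps this sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.8:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
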
	I0815 18:57:27.805467   75302 cni.go:84] Creating CNI manager for ""
	I0815 18:57:27.805474   75302 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:57:27.807441   75302 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:57:27.808919   75302 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:57:27.830417   75302 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
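
For the bridge CNI chosen above, a 496-byte conflist is copied to /etc/cni/net.d/1-k8s.conflist. The log does not show the file's contents; the sketch below writes a typical bridge + portmap conflist, with the subnet taken from the pod-network-cidr (10.42.0.0/16) in the cluster config earlier in the log and everything else assumed for illustration only.

package main

import (
	"fmt"
	"os"
)

// A plausible bridge CNI conflist in the style minikube installs; the
// concrete values (bridge name, CNI version, flags) are assumptions.
const bridgeConflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.42.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Write to a scratch path rather than /etc/cni/net.d so the sketch can
	// run without root.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}
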
	I0815 18:57:27.854303   75302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:57:27.873957   75302 system_pods.go:59] 8 kube-system pods found
	I0815 18:57:27.873992   75302 system_pods.go:61] "coredns-6f6b679f8f-7w89r" [7470c7c7-b4cb-4ae4-9581-39f2cb36d968] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:57:27.873999   75302 system_pods.go:61] "etcd-newest-cni-828957" [3d5846a8-1ba2-4369-8f9a-d561bfb92df7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:57:27.874007   75302 system_pods.go:61] "kube-apiserver-newest-cni-828957" [90da1a29-3b3f-4651-9deb-de004c548755] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:57:27.874013   75302 system_pods.go:61] "kube-controller-manager-newest-cni-828957" [46c24be2-884f-44e2-92b0-f1ac105a911e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:57:27.874019   75302 system_pods.go:61] "kube-proxy-4ctl2" [c4113fe1-fde1-4626-aaf4-3f890704a153] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 18:57:27.874031   75302 system_pods.go:61] "kube-scheduler-newest-cni-828957" [0bbce627-362a-4666-a060-04c6a75001f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:57:27.874038   75302 system_pods.go:61] "metrics-server-6867b74b74-rxncn" [cc3ae60f-c695-4936-a444-7c49ba95347f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:57:27.874043   75302 system_pods.go:61] "storage-provisioner" [c3821d96-434f-43b0-ba70-e291685ee864] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 18:57:27.874050   75302 system_pods.go:74] duration metric: took 19.724441ms to wait for pod list to return data ...
	I0815 18:57:27.874056   75302 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:57:27.881096   75302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:57:27.881126   75302 node_conditions.go:123] node cpu capacity is 2
	I0815 18:57:27.881136   75302 node_conditions.go:105] duration metric: took 7.075613ms to run NodePressure ...
	I0815 18:57:27.881185   75302 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:57:28.184975   75302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:57:28.198115   75302 ops.go:34] apiserver oom_adj: -16
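
Reading /proc/$(pgrep kube-apiserver)/oom_adj above shows the apiserver running with a legacy OOM adjustment of -16, which makes the kernel's OOM killer very unlikely to select it. A small, hypothetical Go sketch of the same pgrep-plus-proc read:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// oomAdjOf returns the legacy oom_adj value for the newest process whose full
// command line matches pattern, mirroring the pgrep -xnf call in the log.
func oomAdjOf(pattern string) (string, error) {
	out, err := exec.Command("pgrep", "-xnf", pattern).Output()
	if err != nil {
		return "", fmt.Errorf("pgrep: %w", err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(adj)), nil
}

func main() {
	adj, err := oomAdjOf("kube-apiserver.*minikube.*")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver oom_adj:", adj)
}
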
	I0815 18:57:28.198141   75302 kubeadm.go:597] duration metric: took 7.627688485s to restartPrimaryControlPlane
	I0815 18:57:28.198150   75302 kubeadm.go:394] duration metric: took 7.674433965s to StartCluster
	I0815 18:57:28.198165   75302 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:57:28.198236   75302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:57:28.201197   75302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:57:28.201486   75302 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.8 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:57:28.201585   75302 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:57:28.201671   75302 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-828957"
	I0815 18:57:28.201689   75302 config.go:182] Loaded profile config "newest-cni-828957": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:57:28.201727   75302 addons.go:69] Setting dashboard=true in profile "newest-cni-828957"
	I0815 18:57:28.201731   75302 addons.go:69] Setting default-storageclass=true in profile "newest-cni-828957"
	I0815 18:57:28.201775   75302 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-828957"
	I0815 18:57:28.201751   75302 addons.go:69] Setting metrics-server=true in profile "newest-cni-828957"
	W0815 18:57:28.201788   75302 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:57:28.201795   75302 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-828957"
	I0815 18:57:28.201827   75302 host.go:66] Checking if "newest-cni-828957" exists ...
	I0815 18:57:28.201833   75302 addons.go:234] Setting addon metrics-server=true in "newest-cni-828957"
	W0815 18:57:28.201849   75302 addons.go:243] addon metrics-server should already be in state true
	I0815 18:57:28.201786   75302 addons.go:234] Setting addon dashboard=true in "newest-cni-828957"
	W0815 18:57:28.201920   75302 addons.go:243] addon dashboard should already be in state true
	I0815 18:57:28.201921   75302 host.go:66] Checking if "newest-cni-828957" exists ...
	I0815 18:57:28.201990   75302 host.go:66] Checking if "newest-cni-828957" exists ...
	I0815 18:57:28.202232   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.202255   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.202293   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.202297   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.202355   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.202369   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.202424   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.202441   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.204438   75302 out.go:177] * Verifying Kubernetes components...
	I0815 18:57:28.206615   75302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:57:28.218407   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45891
	I0815 18:57:28.218473   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39737
	I0815 18:57:28.218784   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.218884   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.219241   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.219259   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.219397   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.219422   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.219896   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.219897   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.220129   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetState
	I0815 18:57:28.220532   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.220578   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.221480   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35281
	I0815 18:57:28.221484   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43973
	I0815 18:57:28.221843   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.221884   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.222308   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.222330   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.222417   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.222437   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.222672   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.224130   75302 addons.go:234] Setting addon default-storageclass=true in "newest-cni-828957"
	W0815 18:57:28.224145   75302 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:57:28.224164   75302 host.go:66] Checking if "newest-cni-828957" exists ...
	I0815 18:57:28.224462   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.224482   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.224707   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.225062   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.225097   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.225240   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.225300   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.243769   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I0815 18:57:28.243780   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0815 18:57:28.244208   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.244305   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.244754   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.244772   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.244897   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.244918   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.245108   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.245346   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetState
	I0815 18:57:28.245461   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.245994   75302 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:57:28.246038   75302 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:57:28.247167   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0815 18:57:28.247485   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:28.247751   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.248281   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.248304   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.248956   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.249312   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetState
	I0815 18:57:28.250503   75302 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:57:28.250969   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:28.251772   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43611
	I0815 18:57:28.252185   75302 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:57:28.252202   75302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:57:28.252211   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.252217   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:28.252874   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.252896   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.253071   75302 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0815 18:57:28.253309   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.253540   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetState
	I0815 18:57:28.255940   75302 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0815 18:57:28.256137   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:28.256457   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:28.256784   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:28.256812   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:28.256846   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:28.257059   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:28.257092   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0815 18:57:28.257107   75302 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0815 18:57:28.257125   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:28.257185   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:28.257333   75302 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa Username:docker}
	I0815 18:57:28.258130   75302 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:57:28.259468   75302 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:57:28.259480   75302 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:57:28.259492   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:28.259668   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:28.260115   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:28.260135   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:28.260280   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:28.260477   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:28.260690   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:28.260831   75302 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa Username:docker}
	I0815 18:57:28.262973   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:28.263398   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:28.263472   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:28.263583   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:28.263746   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:28.263881   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:28.264019   75302 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa Username:docker}
	I0815 18:57:28.269131   75302 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I0815 18:57:28.269436   75302 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:57:28.269854   75302 main.go:141] libmachine: Using API Version  1
	I0815 18:57:28.269873   75302 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:57:28.270109   75302 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:57:28.270247   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetState
	I0815 18:57:28.271494   75302 main.go:141] libmachine: (newest-cni-828957) Calling .DriverName
	I0815 18:57:28.271685   75302 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:57:28.271698   75302 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:57:28.271713   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHHostname
	I0815 18:57:28.274261   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:28.274532   75302 main.go:141] libmachine: (newest-cni-828957) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:09:a9", ip: ""} in network mk-newest-cni-828957: {Iface:virbr4 ExpiryTime:2024-08-15 19:57:06 +0000 UTC Type:0 Mac:52:54:00:6c:09:a9 Iaid: IPaddr:192.168.39.8 Prefix:24 Hostname:newest-cni-828957 Clientid:01:52:54:00:6c:09:a9}
	I0815 18:57:28.274547   75302 main.go:141] libmachine: (newest-cni-828957) DBG | domain newest-cni-828957 has defined IP address 192.168.39.8 and MAC address 52:54:00:6c:09:a9 in network mk-newest-cni-828957
	I0815 18:57:28.274710   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHPort
	I0815 18:57:28.274856   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHKeyPath
	I0815 18:57:28.274974   75302 main.go:141] libmachine: (newest-cni-828957) Calling .GetSSHUsername
	I0815 18:57:28.275086   75302 sshutil.go:53] new ssh client: &{IP:192.168.39.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/newest-cni-828957/id_rsa Username:docker}
	I0815 18:57:28.455833   75302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:57:28.492847   75302 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:57:28.492914   75302 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:57:28.510343   75302 api_server.go:72] duration metric: took 308.812762ms to wait for apiserver process to appear ...
	I0815 18:57:28.510373   75302 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:57:28.510395   75302 api_server.go:253] Checking apiserver healthz at https://192.168.39.8:8443/healthz ...
	I0815 18:57:28.517587   75302 api_server.go:279] https://192.168.39.8:8443/healthz returned 200:
	ok
	I0815 18:57:28.519557   75302 api_server.go:141] control plane version: v1.31.0
	I0815 18:57:28.519584   75302 api_server.go:131] duration metric: took 9.20213ms to wait for apiserver health ...
	I0815 18:57:28.519602   75302 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:57:28.527667   75302 system_pods.go:59] 8 kube-system pods found
	I0815 18:57:28.527705   75302 system_pods.go:61] "coredns-6f6b679f8f-7w89r" [7470c7c7-b4cb-4ae4-9581-39f2cb36d968] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:57:28.527717   75302 system_pods.go:61] "etcd-newest-cni-828957" [3d5846a8-1ba2-4369-8f9a-d561bfb92df7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:57:28.527730   75302 system_pods.go:61] "kube-apiserver-newest-cni-828957" [90da1a29-3b3f-4651-9deb-de004c548755] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:57:28.527741   75302 system_pods.go:61] "kube-controller-manager-newest-cni-828957" [46c24be2-884f-44e2-92b0-f1ac105a911e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:57:28.527751   75302 system_pods.go:61] "kube-proxy-4ctl2" [c4113fe1-fde1-4626-aaf4-3f890704a153] Running
	I0815 18:57:28.527759   75302 system_pods.go:61] "kube-scheduler-newest-cni-828957" [0bbce627-362a-4666-a060-04c6a75001f9] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:57:28.527769   75302 system_pods.go:61] "metrics-server-6867b74b74-rxncn" [cc3ae60f-c695-4936-a444-7c49ba95347f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:57:28.527775   75302 system_pods.go:61] "storage-provisioner" [c3821d96-434f-43b0-ba70-e291685ee864] Running
	I0815 18:57:28.527786   75302 system_pods.go:74] duration metric: took 8.176888ms to wait for pod list to return data ...
	I0815 18:57:28.527797   75302 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:57:28.541141   75302 default_sa.go:45] found service account: "default"
	I0815 18:57:28.541170   75302 default_sa.go:55] duration metric: took 13.367262ms for default service account to be created ...
	I0815 18:57:28.541183   75302 kubeadm.go:582] duration metric: took 339.66278ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0815 18:57:28.541202   75302 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:57:28.546030   75302 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:57:28.546060   75302 node_conditions.go:123] node cpu capacity is 2
	I0815 18:57:28.546072   75302 node_conditions.go:105] duration metric: took 4.864288ms to run NodePressure ...
	I0815 18:57:28.546085   75302 start.go:241] waiting for startup goroutines ...
	I0815 18:57:28.573317   75302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:57:28.574522   75302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:57:28.662185   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0815 18:57:28.662211   75302 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0815 18:57:28.667145   75302 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:57:28.667168   75302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:57:28.699673   75302 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:57:28.699695   75302 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:57:28.742992   75302 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:57:28.743018   75302 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:57:28.751028   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0815 18:57:28.751048   75302 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0815 18:57:28.803740   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0815 18:57:28.803761   75302 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0815 18:57:28.862196   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0815 18:57:28.862220   75302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0815 18:57:28.863583   75302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:57:28.904941   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0815 18:57:28.904969   75302 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0815 18:57:29.002884   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0815 18:57:29.002911   75302 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0815 18:57:29.107755   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0815 18:57:29.107786   75302 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0815 18:57:29.194681   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0815 18:57:29.194721   75302 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0815 18:57:29.228203   75302 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 18:57:29.228227   75302 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0815 18:57:29.262863   75302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 18:57:30.417272   75302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.842718212s)
	I0815 18:57:30.417333   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.417346   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.417763   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Closing plugin on server side
	I0815 18:57:30.417793   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.417807   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.417816   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.417858   75302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.844513082s)
	I0815 18:57:30.417865   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.417890   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.417902   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.418175   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.418185   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.418193   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.418205   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.418205   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Closing plugin on server side
	I0815 18:57:30.418225   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Closing plugin on server side
	I0815 18:57:30.418215   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.418384   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.418606   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.418624   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.434067   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.434087   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.434413   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.434455   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.449799   75302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.586174428s)
	I0815 18:57:30.449854   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.449873   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.450175   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.450194   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.450219   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.450226   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Closing plugin on server side
	I0815 18:57:30.450229   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.450523   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.450539   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Closing plugin on server side
	I0815 18:57:30.450550   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.450560   75302 addons.go:475] Verifying addon metrics-server=true in "newest-cni-828957"
	I0815 18:57:30.896707   75302 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.633787116s)
	I0815 18:57:30.896767   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.896784   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.897173   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.897237   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.897251   75302 main.go:141] libmachine: Making call to close driver server
	I0815 18:57:30.897262   75302 main.go:141] libmachine: (newest-cni-828957) Calling .Close
	I0815 18:57:30.897204   75302 main.go:141] libmachine: (newest-cni-828957) DBG | Closing plugin on server side
	I0815 18:57:30.897533   75302 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:57:30.897547   75302 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:57:30.899066   75302 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-828957 addons enable metrics-server
	
	I0815 18:57:30.900891   75302 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0815 18:57:30.902175   75302 addons.go:510] duration metric: took 2.700588908s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0815 18:57:30.902218   75302 start.go:246] waiting for cluster config update ...
	I0815 18:57:30.902232   75302 start.go:255] writing updated cluster config ...
	I0815 18:57:30.902595   75302 ssh_runner.go:195] Run: rm -f paused
	I0815 18:57:30.953052   75302 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:57:30.954945   75302 out.go:177] * Done! kubectl is now configured to use "newest-cni-828957" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.442264689Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748252442231142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8e8e707-4923-4414-b9e2-1bab54e94096 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.442866894Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=387f68eb-f637-428a-b4d7-352abd0a7ffc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.442948007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=387f68eb-f637-428a-b4d7-352abd0a7ffc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.443220976Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747099078648476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 593f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f67889c939c1c28f9d604fde6516fabfbdf1713a402fc1bb229d11db5af0a05,PodSandboxId:b2fbf56a4f219ec0cb5f6103ff5aa805c8ece23e530c5591cbcc84a7042479c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747078754762854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38120fa0-c110-4003-a0a2-ecf726f1a3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c,PodSandboxId:15895665850f1469a24f3cac28ff257e2468adfd83ef2d438062547e1710e688,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747075914637405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kpq9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9592b56d-a037-4212-86f2-29e5824626fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747068262730483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
93f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791,PodSandboxId:0758c39c1907e8b7b52e57c51af54b47b5e46ed50dd5b2498463c979fceb45de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747068312810184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwb9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f286e9d-3035-4280-adff-d3ca5653c2
f8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27,PodSandboxId:1e552e5c3ce5d5d07939e63eaabe226524c2b54591bfc590eeae2d88cf4a2735,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747063560326166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92824f436589abf4cecd2cad2981043b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de,PodSandboxId:ae7eb74e81608640ff66458131e72a19d7976c4944b3a7a1c2b6f85a2f30277f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747063655462864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef872432fb4e315dd3151104265c9da6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f,PodSandboxId:de630a3983fd53e5b1a4ec27b5fa23dcbb61f069dbea8afcf8c1d8ef3ad6bb3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747063480456290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567329cb6993a54a0826ef2ad1abb690,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f,PodSandboxId:d20e0818100ae91fbf69e7d9cf3a3b7c8896b9df3a05deed98f248ddae1876e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747063425442460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202e22fcf9be3034b0f682399dce7ac3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=387f68eb-f637-428a-b4d7-352abd0a7ffc name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.488660877Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e4e83a1-78fe-4ce9-8316-6a7a68ab85f6 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.488751484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e4e83a1-78fe-4ce9-8316-6a7a68ab85f6 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.492132243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cef1d9fc-197b-41c8-bd59-c246bd1dfa3a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.493054238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748252492557469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cef1d9fc-197b-41c8-bd59-c246bd1dfa3a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.493770870Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=230d873b-d2c7-408f-a81f-3a290b885c7b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.493863152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=230d873b-d2c7-408f-a81f-3a290b885c7b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.494476012Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747099078648476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 593f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f67889c939c1c28f9d604fde6516fabfbdf1713a402fc1bb229d11db5af0a05,PodSandboxId:b2fbf56a4f219ec0cb5f6103ff5aa805c8ece23e530c5591cbcc84a7042479c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747078754762854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38120fa0-c110-4003-a0a2-ecf726f1a3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c,PodSandboxId:15895665850f1469a24f3cac28ff257e2468adfd83ef2d438062547e1710e688,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747075914637405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kpq9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9592b56d-a037-4212-86f2-29e5824626fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747068262730483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
93f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791,PodSandboxId:0758c39c1907e8b7b52e57c51af54b47b5e46ed50dd5b2498463c979fceb45de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747068312810184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwb9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f286e9d-3035-4280-adff-d3ca5653c2
f8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27,PodSandboxId:1e552e5c3ce5d5d07939e63eaabe226524c2b54591bfc590eeae2d88cf4a2735,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747063560326166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92824f436589abf4cecd2cad2981043b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de,PodSandboxId:ae7eb74e81608640ff66458131e72a19d7976c4944b3a7a1c2b6f85a2f30277f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747063655462864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef872432fb4e315dd3151104265c9da6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f,PodSandboxId:de630a3983fd53e5b1a4ec27b5fa23dcbb61f069dbea8afcf8c1d8ef3ad6bb3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747063480456290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567329cb6993a54a0826ef2ad1abb690,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f,PodSandboxId:d20e0818100ae91fbf69e7d9cf3a3b7c8896b9df3a05deed98f248ddae1876e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747063425442460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202e22fcf9be3034b0f682399dce7ac3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=230d873b-d2c7-408f-a81f-3a290b885c7b name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.536069802Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8568aefd-489a-45a3-bd2f-54af395f8b06 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.536159517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8568aefd-489a-45a3-bd2f-54af395f8b06 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.537972713Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b394085d-0e12-4975-a3ba-9f7cfb0d8932 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.538340595Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748252538319003,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b394085d-0e12-4975-a3ba-9f7cfb0d8932 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.539468248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b67e975-e9ee-4e18-881e-afd2fe0a6a39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.539552597Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b67e975-e9ee-4e18-881e-afd2fe0a6a39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.539846435Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747099078648476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 593f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f67889c939c1c28f9d604fde6516fabfbdf1713a402fc1bb229d11db5af0a05,PodSandboxId:b2fbf56a4f219ec0cb5f6103ff5aa805c8ece23e530c5591cbcc84a7042479c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747078754762854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38120fa0-c110-4003-a0a2-ecf726f1a3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c,PodSandboxId:15895665850f1469a24f3cac28ff257e2468adfd83ef2d438062547e1710e688,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747075914637405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kpq9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9592b56d-a037-4212-86f2-29e5824626fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747068262730483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
93f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791,PodSandboxId:0758c39c1907e8b7b52e57c51af54b47b5e46ed50dd5b2498463c979fceb45de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747068312810184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwb9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f286e9d-3035-4280-adff-d3ca5653c2
f8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27,PodSandboxId:1e552e5c3ce5d5d07939e63eaabe226524c2b54591bfc590eeae2d88cf4a2735,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747063560326166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92824f436589abf4cecd2cad2981043b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de,PodSandboxId:ae7eb74e81608640ff66458131e72a19d7976c4944b3a7a1c2b6f85a2f30277f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747063655462864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef872432fb4e315dd3151104265c9da6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f,PodSandboxId:de630a3983fd53e5b1a4ec27b5fa23dcbb61f069dbea8afcf8c1d8ef3ad6bb3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747063480456290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567329cb6993a54a0826ef2ad1abb690,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f,PodSandboxId:d20e0818100ae91fbf69e7d9cf3a3b7c8896b9df3a05deed98f248ddae1876e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747063425442460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202e22fcf9be3034b0f682399dce7ac3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b67e975-e9ee-4e18-881e-afd2fe0a6a39 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.583253811Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e0a5fc73-0211-4dc2-a866-003d84913a4f name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.583368076Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e0a5fc73-0211-4dc2-a866-003d84913a4f name=/runtime.v1.RuntimeService/Version
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.584535434Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a1b79c2-b7a0-4279-95c6-bf46aa06048a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.585108301Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748252585082442,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a1b79c2-b7a0-4279-95c6-bf46aa06048a name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.585651965Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8ee7c895-f9b7-4603-b5f3-253b4c97862a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.585773649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8ee7c895-f9b7-4603-b5f3-253b4c97862a name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:57:32 no-preload-599042 crio[726]: time="2024-08-15 18:57:32.586068226Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1723747099078648476,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 593f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8f67889c939c1c28f9d604fde6516fabfbdf1713a402fc1bb229d11db5af0a05,PodSandboxId:b2fbf56a4f219ec0cb5f6103ff5aa805c8ece23e530c5591cbcc84a7042479c2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1723747078754762854,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 38120fa0-c110-4003-a0a2-ecf726f1a3b6,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c,PodSandboxId:15895665850f1469a24f3cac28ff257e2468adfd83ef2d438062547e1710e688,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1723747075914637405,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6f6b679f8f-kpq9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9592b56d-a037-4212-86f2-29e5824626fc,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420,PodSandboxId:d42babab0be95908aaad3c87a1a9be501d792426122fd6f7034db78572c623e1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1723747068262730483,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5
93f1bd8-17e0-471e-849c-d62d6ed5b14e,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791,PodSandboxId:0758c39c1907e8b7b52e57c51af54b47b5e46ed50dd5b2498463c979fceb45de,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494,State:CONTAINER_RUNNING,CreatedAt:1723747068312810184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bwb9h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f286e9d-3035-4280-adff-d3ca5653c2
f8,},Annotations:map[string]string{io.kubernetes.container.hash: 78ccb3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27,PodSandboxId:1e552e5c3ce5d5d07939e63eaabe226524c2b54591bfc590eeae2d88cf4a2735,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94,State:CONTAINER_RUNNING,CreatedAt:1723747063560326166,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 92824f436589abf4cecd2cad2981043b,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: f8fb4364,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de,PodSandboxId:ae7eb74e81608640ff66458131e72a19d7976c4944b3a7a1c2b6f85a2f30277f,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1723747063655462864,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ef872432fb4e315dd3151104265c9da6,},Annotations:map[string]string{io.kubernetes.containe
r.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f,PodSandboxId:de630a3983fd53e5b1a4ec27b5fa23dcbb61f069dbea8afcf8c1d8ef3ad6bb3f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3,State:CONTAINER_RUNNING,CreatedAt:1723747063480456290,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 567329cb6993a54a0826ef2ad1abb690,},Annotations:map[string]string{io.kubernetes.container.hash: f72d0
944,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f,PodSandboxId:d20e0818100ae91fbf69e7d9cf3a3b7c8896b9df3a05deed98f248ddae1876e6,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1,State:CONTAINER_RUNNING,CreatedAt:1723747063425442460,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-599042,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 202e22fcf9be3034b0f682399dce7ac3,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 3994b1a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8ee7c895-f9b7-4603-b5f3-253b4c97862a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	000b1f65df4e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   d42babab0be95       storage-provisioner
	8f67889c939c1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   b2fbf56a4f219       busybox
	ba61cbc99841c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      19 minutes ago      Running             coredns                   1                   15895665850f1       coredns-6f6b679f8f-kpq9m
	66df56dcd33cf       ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494                                      19 minutes ago      Running             kube-proxy                1                   0758c39c1907e       kube-proxy-bwb9h
	1a53d726afaa5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       1                   d42babab0be95       storage-provisioner
	f93d6e3cca40c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      19 minutes ago      Running             etcd                      1                   ae7eb74e81608       etcd-no-preload-599042
	74f2072bea476       1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94                                      19 minutes ago      Running             kube-scheduler            1                   1e552e5c3ce5d       kube-scheduler-no-preload-599042
	831a14c2b0bb2       604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3                                      19 minutes ago      Running             kube-apiserver            1                   de630a3983fd5       kube-apiserver-no-preload-599042
	c4afb41627fd6       045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1                                      19 minutes ago      Running             kube-controller-manager   1                   d20e0818100ae       kube-controller-manager-no-preload-599042
	
	
	==> coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53230 - 59055 "HINFO IN 998974764882245978.2108705576189184450. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015086063s
	
	
	==> describe nodes <==
	Name:               no-preload-599042
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-599042
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=no-preload-599042
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T18_28_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 18:28:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-599042
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 18:57:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 18:53:36 +0000   Thu, 15 Aug 2024 18:28:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 18:53:36 +0000   Thu, 15 Aug 2024 18:28:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 18:53:36 +0000   Thu, 15 Aug 2024 18:28:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 18:53:36 +0000   Thu, 15 Aug 2024 18:37:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.14
	  Hostname:    no-preload-599042
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e198536b9a0e45afb82f8ee8d9f6ab80
	  System UUID:                e198536b-9a0e-45af-b82f-8ee8d9f6ab80
	  Boot ID:                    878ff641-9d9f-4cb1-ae56-44926fece655
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-6f6b679f8f-kpq9m                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-no-preload-599042                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-no-preload-599042             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-no-preload-599042    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-bwb9h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-no-preload-599042             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-6867b74b74-djv7r              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node no-preload-599042 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node no-preload-599042 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node no-preload-599042 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-599042 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-599042 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-599042 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-599042 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-599042 event: Registered Node no-preload-599042 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node no-preload-599042 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node no-preload-599042 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node no-preload-599042 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-599042 event: Registered Node no-preload-599042 in Controller
	
	
	==> dmesg <==
	[Aug15 18:37] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.058135] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043981] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.167378] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.640733] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591032] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.799338] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.060697] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.055583] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.185665] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.120148] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.272466] systemd-fstab-generator[710]: Ignoring "noauto" option for root device
	[ +16.290318] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.054796] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.103469] systemd-fstab-generator[1430]: Ignoring "noauto" option for root device
	[  +4.435828] kauditd_printk_skb: 97 callbacks suppressed
	[  +3.595375] systemd-fstab-generator[2059]: Ignoring "noauto" option for root device
	[  +3.290422] kauditd_printk_skb: 61 callbacks suppressed
	[Aug15 18:38] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] <==
	{"level":"info","ts":"2024-08-15T18:52:45.865217Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2934746569,"revision":1081,"compact-revision":839}
	{"level":"warn","ts":"2024-08-15T18:56:14.095992Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.078911ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17270619950307404078 > lease_revoke:<id:6fad915754d8e8d4>","response":"size:28"}
	{"level":"info","ts":"2024-08-15T18:56:30.086911Z","caller":"traceutil/trace.go:171","msg":"trace[662848172] transaction","detail":"{read_only:false; response_revision:1508; number_of_response:1; }","duration":"127.715275ms","start":"2024-08-15T18:56:29.959160Z","end":"2024-08-15T18:56:30.086875Z","steps":["trace[662848172] 'process raft request'  (duration: 127.5362ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:56:30.921048Z","caller":"traceutil/trace.go:171","msg":"trace[1301188539] linearizableReadLoop","detail":"{readStateIndex:1773; appliedIndex:1772; }","duration":"264.53493ms","start":"2024-08-15T18:56:30.656494Z","end":"2024-08-15T18:56:30.921029Z","steps":["trace[1301188539] 'read index received'  (duration: 264.324261ms)","trace[1301188539] 'applied index is now lower than readState.Index'  (duration: 209.865µs)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:56:30.921248Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"264.673787ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:56:30.921323Z","caller":"traceutil/trace.go:171","msg":"trace[535510837] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1509; }","duration":"264.823184ms","start":"2024-08-15T18:56:30.656489Z","end":"2024-08-15T18:56:30.921312Z","steps":["trace[535510837] 'agreement among raft nodes before linearized reading'  (duration: 264.633992ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-15T18:56:30.921553Z","caller":"traceutil/trace.go:171","msg":"trace[1444314293] transaction","detail":"{read_only:false; response_revision:1509; number_of_response:1; }","duration":"321.44593ms","start":"2024-08-15T18:56:30.600094Z","end":"2024-08-15T18:56:30.921540Z","steps":["trace[1444314293] 'process raft request'  (duration: 320.802567ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:56:30.923302Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:56:30.600077Z","time spent":"322.305793ms","remote":"127.0.0.1:39690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1507 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-08-15T18:57:21.937994Z","caller":"traceutil/trace.go:171","msg":"trace[2131222300] transaction","detail":"{read_only:false; response_revision:1548; number_of_response:1; }","duration":"731.077257ms","start":"2024-08-15T18:57:21.206897Z","end":"2024-08-15T18:57:21.937974Z","steps":["trace[2131222300] 'process raft request'  (duration: 730.952549ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:21.938280Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:21.206878Z","time spent":"731.319989ms","remote":"127.0.0.1:39690","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1547 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-08-15T18:57:22.264396Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.170066ms","expected-duration":"100ms","prefix":"","request":"header:<ID:17270619950307404478 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-wgdkkv3b4ikzdth62bdewbjx7a\" mod_revision:1540 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-wgdkkv3b4ikzdth62bdewbjx7a\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-wgdkkv3b4ikzdth62bdewbjx7a\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-08-15T18:57:22.264528Z","caller":"traceutil/trace.go:171","msg":"trace[860560885] linearizableReadLoop","detail":"{readStateIndex:1823; appliedIndex:1822; }","duration":"826.80117ms","start":"2024-08-15T18:57:21.437715Z","end":"2024-08-15T18:57:22.264516Z","steps":["trace[860560885] 'read index received'  (duration: 501.005686ms)","trace[860560885] 'applied index is now lower than readState.Index'  (duration: 325.794526ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:57:22.264752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"827.027321ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.265687Z","caller":"traceutil/trace.go:171","msg":"trace[429413728] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1549; }","duration":"827.966369ms","start":"2024-08-15T18:57:21.437711Z","end":"2024-08-15T18:57:22.265677Z","steps":["trace[429413728] 'agreement among raft nodes before linearized reading'  (duration: 827.000147ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.265821Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:21.437678Z","time spent":"828.132672ms","remote":"127.0.0.1:39504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-08-15T18:57:22.264844Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"805.843622ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.265917Z","caller":"traceutil/trace.go:171","msg":"trace[1141353578] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1549; }","duration":"806.92146ms","start":"2024-08-15T18:57:21.458987Z","end":"2024-08-15T18:57:22.265908Z","steps":["trace[1141353578] 'agreement among raft nodes before linearized reading'  (duration: 805.82358ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.265969Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:21.458949Z","time spent":"807.010604ms","remote":"127.0.0.1:39708","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-08-15T18:57:22.264887Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"608.15814ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.266108Z","caller":"traceutil/trace.go:171","msg":"trace[2033258123] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1549; }","duration":"609.379524ms","start":"2024-08-15T18:57:21.656723Z","end":"2024-08-15T18:57:22.266102Z","steps":["trace[2033258123] 'agreement among raft nodes before linearized reading'  (duration: 608.151918ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.264917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"634.903811ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-15T18:57:22.266788Z","caller":"traceutil/trace.go:171","msg":"trace[1883373309] range","detail":"{range_begin:/registry/validatingadmissionpolicybindings/; range_end:/registry/validatingadmissionpolicybindings0; response_count:0; response_revision:1549; }","duration":"636.770714ms","start":"2024-08-15T18:57:21.630006Z","end":"2024-08-15T18:57:22.266777Z","steps":["trace[1883373309] 'agreement among raft nodes before linearized reading'  (duration: 634.892925ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-15T18:57:22.266901Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:21.629974Z","time spent":"636.895208ms","remote":"127.0.0.1:40070","response type":"/etcdserverpb.KV/Range","request count":0,"request size":94,"response count":0,"response size":28,"request content":"key:\"/registry/validatingadmissionpolicybindings/\" range_end:\"/registry/validatingadmissionpolicybindings0\" count_only:true "}
	{"level":"info","ts":"2024-08-15T18:57:22.264950Z","caller":"traceutil/trace.go:171","msg":"trace[1063126066] transaction","detail":"{read_only:false; response_revision:1549; number_of_response:1; }","duration":"835.918713ms","start":"2024-08-15T18:57:21.429020Z","end":"2024-08-15T18:57:22.264939Z","steps":["trace[1063126066] 'process raft request'  (duration: 583.128081ms)","trace[1063126066] 'compare'  (duration: 252.062718ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-15T18:57:22.267213Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-15T18:57:21.429005Z","time spent":"838.167972ms","remote":"127.0.0.1:39770","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":682,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-wgdkkv3b4ikzdth62bdewbjx7a\" mod_revision:1540 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-wgdkkv3b4ikzdth62bdewbjx7a\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-wgdkkv3b4ikzdth62bdewbjx7a\" > >"}
	
	
	==> kernel <==
	 18:57:32 up 20 min,  0 users,  load average: 0.12, 0.11, 0.09
	Linux no-preload-599042 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] <==
	E0815 18:52:48.235680       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0815 18:52:48.235757       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:52:48.236841       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:52:48.236891       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:53:48.237612       1 handler_proxy.go:99] no RequestInfo found in the context
	W0815 18:53:48.237652       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:53:48.237890       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0815 18:53:48.237959       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:53:48.239091       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:53:48.239191       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0815 18:55:48.239556       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:55:48.239780       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0815 18:55:48.240118       1 handler_proxy.go:99] no RequestInfo found in the context
	E0815 18:55:48.240320       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0815 18:55:48.241277       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 18:55:48.242474       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] <==
	E0815 18:52:20.930982       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:52:21.417129       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:52:50.937036       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:52:51.425560       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:53:20.943507       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:53:21.433710       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:53:36.158560       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-599042"
	E0815 18:53:50.950120       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:53:51.444055       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0815 18:54:08.868983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="210.369µs"
	I0815 18:54:19.866564       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="118.814µs"
	E0815 18:54:20.956374       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:54:21.450544       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:54:50.963856       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:54:51.459965       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:55:20.970925       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:55:21.467873       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:55:50.977452       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:55:51.474040       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:56:20.984290       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:56:21.480853       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:56:50.991268       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:56:51.488490       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0815 18:57:20.997135       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0815 18:57:21.497545       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0815 18:37:48.527552       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0815 18:37:48.536176       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.14"]
	E0815 18:37:48.536361       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 18:37:48.572735       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0815 18:37:48.572782       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0815 18:37:48.572807       1 server_linux.go:169] "Using iptables Proxier"
	I0815 18:37:48.575705       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 18:37:48.576062       1 server.go:483] "Version info" version="v1.31.0"
	I0815 18:37:48.576088       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:37:48.577783       1 config.go:197] "Starting service config controller"
	I0815 18:37:48.577823       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 18:37:48.577844       1 config.go:104] "Starting endpoint slice config controller"
	I0815 18:37:48.577848       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 18:37:48.579381       1 config.go:326] "Starting node config controller"
	I0815 18:37:48.579410       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 18:37:48.678624       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 18:37:48.678679       1 shared_informer.go:320] Caches are synced for service config
	I0815 18:37:48.679995       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] <==
	I0815 18:37:44.705906       1 serving.go:386] Generated self-signed cert in-memory
	W0815 18:37:47.165664       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 18:37:47.165848       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 18:37:47.165934       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 18:37:47.165959       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 18:37:47.253180       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0815 18:37:47.253329       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 18:37:47.256561       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0815 18:37:47.256766       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0815 18:37:47.256835       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0815 18:37:47.257161       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 18:37:47.357681       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 15 18:56:27 no-preload-599042 kubelet[1437]: E0815 18:56:27.851183    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:56:33 no-preload-599042 kubelet[1437]: E0815 18:56:33.110214    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748193109717134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:33 no-preload-599042 kubelet[1437]: E0815 18:56:33.110627    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748193109717134,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:38 no-preload-599042 kubelet[1437]: E0815 18:56:38.856927    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:56:42 no-preload-599042 kubelet[1437]: E0815 18:56:42.869561    1437 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 15 18:56:42 no-preload-599042 kubelet[1437]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 15 18:56:42 no-preload-599042 kubelet[1437]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 15 18:56:42 no-preload-599042 kubelet[1437]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 15 18:56:42 no-preload-599042 kubelet[1437]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 15 18:56:43 no-preload-599042 kubelet[1437]: E0815 18:56:43.112372    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748203111999346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:43 no-preload-599042 kubelet[1437]: E0815 18:56:43.112403    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748203111999346,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:52 no-preload-599042 kubelet[1437]: E0815 18:56:52.857320    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:56:53 no-preload-599042 kubelet[1437]: E0815 18:56:53.114785    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748213114076891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:56:53 no-preload-599042 kubelet[1437]: E0815 18:56:53.114884    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748213114076891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:03 no-preload-599042 kubelet[1437]: E0815 18:57:03.117043    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748223116424540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:03 no-preload-599042 kubelet[1437]: E0815 18:57:03.117438    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748223116424540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:05 no-preload-599042 kubelet[1437]: E0815 18:57:05.851288    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:57:13 no-preload-599042 kubelet[1437]: E0815 18:57:13.119183    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748233118914653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:13 no-preload-599042 kubelet[1437]: E0815 18:57:13.119244    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748233118914653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:18 no-preload-599042 kubelet[1437]: E0815 18:57:18.851511    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:57:23 no-preload-599042 kubelet[1437]: E0815 18:57:23.121724    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748243121198739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:23 no-preload-599042 kubelet[1437]: E0815 18:57:23.122062    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748243121198739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:30 no-preload-599042 kubelet[1437]: E0815 18:57:30.851880    1437 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-djv7r" podUID="3d03d5bc-31ed-4a75-8d75-627d40a2d8fc"
	Aug 15 18:57:33 no-preload-599042 kubelet[1437]: E0815 18:57:33.124094    1437 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748253123510311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Aug 15 18:57:33 no-preload-599042 kubelet[1437]: E0815 18:57:33.124126    1437 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748253123510311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:100688,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] <==
	I0815 18:38:19.161510       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 18:38:19.173233       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 18:38:19.173340       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 18:38:19.181838       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 18:38:19.181993       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-599042_17dee5fe-21a1-403e-b470-19ab99791054!
	I0815 18:38:19.185118       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"878577f0-7b6e-4dac-8c6f-ccfc640f6556", APIVersion:"v1", ResourceVersion:"603", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-599042_17dee5fe-21a1-403e-b470-19ab99791054 became leader
	I0815 18:38:19.282471       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-599042_17dee5fe-21a1-403e-b470-19ab99791054!
	
	
	==> storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] <==
	I0815 18:37:48.460152       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 18:38:18.462790       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-599042 -n no-preload-599042
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-599042 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-djv7r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-599042 describe pod metrics-server-6867b74b74-djv7r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-599042 describe pod metrics-server-6867b74b74-djv7r: exit status 1 (77.647171ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-djv7r" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-599042 describe pod metrics-server-6867b74b74-djv7r: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (378.31s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (100.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
E0815 18:54:52.218331   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: the same pod list query against https://192.168.39.89:8443 failed with "connect: connection refused" on 47 further polls
E0815 18:55:50.806211   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.89:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.89:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: the same pod list query failed with "connect: connection refused" on 2 further polls
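These warnings come from the test helper repeatedly listing dashboard pods through the apiserver at 192.168.39.89:8443. As a rough manual equivalent (a sketch, assuming the kubeconfig context matches the profile name used by the test), the same label-selector query can be issued directly with kubectl:

	kubectl --context old-k8s-version-278865 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While the apiserver is down this fails with the same "connection refused" error shown above.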
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 2 (250.840961ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-278865" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-278865 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-278865 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.817µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-278865 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
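Once the apiserver is reachable again, the expected-image check can be repeated by hand; a minimal sketch (assuming the deployment name dashboard-metrics-scraper used by the test) is:

	kubectl --context old-k8s-version-278865 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The test expects the output to contain registry.k8s.io/echoserver:1.4, the override passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 when the dashboard addon was enabled.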
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 2 (222.972973ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-278865 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-278865 logs -n 25: (1.549392056s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:26 UTC | 15 Aug 24 18:27 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-498665                              | stopped-upgrade-498665       | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-729203                           | kubernetes-upgrade-729203    | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:27 UTC |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:27 UTC | 15 Aug 24 18:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-003860                              | cert-expiration-003860       | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	| delete  | -p                                                     | disable-driver-mounts-698209 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | disable-driver-mounts-698209                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:29 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-599042             | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC | 15 Aug 24 18:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:28 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-555028            | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-423062  | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC | 15 Aug 24 18:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:29 UTC |                     |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-278865        | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:30 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-599042                  | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-599042                                   | no-preload-599042            | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC | 15 Aug 24 18:42 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-555028                 | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:31 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-555028                                  | embed-certs-555028           | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-423062       | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-423062 | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:41 UTC |
	|         | default-k8s-diff-port-423062                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-278865             | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC | 15 Aug 24 18:32 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-278865                              | old-k8s-version-278865       | jenkins | v1.33.1 | 15 Aug 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 18:32:52
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 18:32:52.788403   68713 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:32:52.788704   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788715   68713 out.go:358] Setting ErrFile to fd 2...
	I0815 18:32:52.788719   68713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:32:52.788916   68713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:32:52.789431   68713 out.go:352] Setting JSON to false
	I0815 18:32:52.790297   68713 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8119,"bootTime":1723738654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:32:52.790355   68713 start.go:139] virtualization: kvm guest
	I0815 18:32:52.792478   68713 out.go:177] * [old-k8s-version-278865] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:32:52.793818   68713 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:32:52.793864   68713 notify.go:220] Checking for updates...
	I0815 18:32:52.796618   68713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:32:52.797914   68713 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:32:52.799054   68713 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:32:52.800337   68713 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:32:52.801448   68713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:32:52.803087   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:32:52.803465   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.803521   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.819013   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I0815 18:32:52.819447   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.819966   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.819985   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.820284   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.820482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.822582   68713 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 18:32:52.824024   68713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:32:52.824380   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:32:52.824425   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:32:52.839486   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0815 18:32:52.839905   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:32:52.840345   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:32:52.840367   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:32:52.840730   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:32:52.840904   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:32:52.876811   68713 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 18:32:52.878075   68713 start.go:297] selected driver: kvm2
	I0815 18:32:52.878098   68713 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280
h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.878240   68713 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:32:52.878920   68713 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.879001   68713 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 18:32:52.894158   68713 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 18:32:52.894895   68713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:32:52.894953   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:32:52.894969   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:32:52.895020   68713 start.go:340] cluster config:
	{Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:32:52.895203   68713 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 18:32:52.897304   68713 out.go:177] * Starting "old-k8s-version-278865" primary control-plane node in "old-k8s-version-278865" cluster
	I0815 18:32:51.348753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:32:52.898737   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:32:52.898785   68713 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 18:32:52.898795   68713 cache.go:56] Caching tarball of preloaded images
	I0815 18:32:52.898861   68713 preload.go:172] Found /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0815 18:32:52.898871   68713 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 18:32:52.898962   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:32:52.899159   68713 start.go:360] acquireMachinesLock for old-k8s-version-278865: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:32:57.424754   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:00.496786   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:06.576768   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:09.648759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:15.728760   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:18.800783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:24.880725   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:27.952781   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:34.032763   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:37.104737   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:43.184796   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:46.260701   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:52.336771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:33:55.408745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:01.488742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:04.560759   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:10.640771   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:13.712753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:19.792795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:22.864720   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:28.944769   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:32.016745   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:38.096783   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:41.168739   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:47.248802   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:50.320778   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:56.400717   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:34:59.472780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:05.552762   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:08.624707   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:14.704753   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:17.776748   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:23.856782   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:26.932742   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:33.008795   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:36.080807   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:42.160767   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:45.232800   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:51.312780   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:35:54.384719   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:00.464740   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:03.536736   67936 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.14:22: connect: no route to host
	I0815 18:36:06.540805   68248 start.go:364] duration metric: took 4m1.610543673s to acquireMachinesLock for "embed-certs-555028"
	I0815 18:36:06.540869   68248 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:06.540881   68248 fix.go:54] fixHost starting: 
	I0815 18:36:06.541241   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:06.541272   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:06.556680   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0815 18:36:06.557105   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:06.557518   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:36:06.557540   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:06.557831   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:06.558059   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:06.558202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:36:06.559702   68248 fix.go:112] recreateIfNeeded on embed-certs-555028: state=Stopped err=<nil>
	I0815 18:36:06.559724   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	W0815 18:36:06.559877   68248 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:06.561410   68248 out.go:177] * Restarting existing kvm2 VM for "embed-certs-555028" ...
	I0815 18:36:06.538474   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:06.538508   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.538805   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:36:06.538831   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:36:06.539016   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:36:06.540664   67936 machine.go:96] duration metric: took 4m37.431349663s to provisionDockerMachine
	I0815 18:36:06.540702   67936 fix.go:56] duration metric: took 4m37.452150687s for fixHost
	I0815 18:36:06.540709   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 4m37.452172562s
	W0815 18:36:06.540732   67936 start.go:714] error starting host: provision: host is not running
	W0815 18:36:06.540801   67936 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0815 18:36:06.540809   67936 start.go:729] Will try again in 5 seconds ...
	I0815 18:36:06.562384   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Start
	I0815 18:36:06.562537   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring networks are active...
	I0815 18:36:06.563252   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network default is active
	I0815 18:36:06.563554   68248 main.go:141] libmachine: (embed-certs-555028) Ensuring network mk-embed-certs-555028 is active
	I0815 18:36:06.563908   68248 main.go:141] libmachine: (embed-certs-555028) Getting domain xml...
	I0815 18:36:06.564614   68248 main.go:141] libmachine: (embed-certs-555028) Creating domain...
	I0815 18:36:07.763793   68248 main.go:141] libmachine: (embed-certs-555028) Waiting to get IP...
	I0815 18:36:07.764733   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.765099   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.765200   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.765085   69393 retry.go:31] will retry after 206.840107ms: waiting for machine to come up
	I0815 18:36:07.973596   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:07.974069   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:07.974093   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:07.974019   69393 retry.go:31] will retry after 319.002956ms: waiting for machine to come up
	I0815 18:36:08.294670   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.295125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.295154   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.295073   69393 retry.go:31] will retry after 425.99373ms: waiting for machine to come up
	I0815 18:36:08.722549   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:08.722954   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:08.722985   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:08.722903   69393 retry.go:31] will retry after 428.077891ms: waiting for machine to come up
	I0815 18:36:09.152674   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.153155   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.153187   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.153108   69393 retry.go:31] will retry after 476.041155ms: waiting for machine to come up
	I0815 18:36:09.630963   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:09.631456   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:09.631485   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:09.631395   69393 retry.go:31] will retry after 751.179582ms: waiting for machine to come up
	I0815 18:36:11.542364   67936 start.go:360] acquireMachinesLock for no-preload-599042: {Name:mk5c94326054b6faebe87b43653bce73979385d8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0815 18:36:10.384466   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:10.384888   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:10.384916   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:10.384842   69393 retry.go:31] will retry after 1.028202731s: waiting for machine to come up
	I0815 18:36:11.414905   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:11.415343   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:11.415373   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:11.415283   69393 retry.go:31] will retry after 1.129105535s: waiting for machine to come up
	I0815 18:36:12.545941   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:12.546365   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:12.546387   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:12.546320   69393 retry.go:31] will retry after 1.734323812s: waiting for machine to come up
	I0815 18:36:14.283247   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:14.283622   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:14.283653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:14.283569   69393 retry.go:31] will retry after 1.657173562s: waiting for machine to come up
	I0815 18:36:15.943329   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:15.943730   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:15.943760   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:15.943669   69393 retry.go:31] will retry after 2.349664822s: waiting for machine to come up
	I0815 18:36:18.295797   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:18.296330   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:18.296363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:18.296264   69393 retry.go:31] will retry after 2.889119284s: waiting for machine to come up
	I0815 18:36:21.186597   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:21.186983   68248 main.go:141] libmachine: (embed-certs-555028) DBG | unable to find current IP address of domain embed-certs-555028 in network mk-embed-certs-555028
	I0815 18:36:21.187004   68248 main.go:141] libmachine: (embed-certs-555028) DBG | I0815 18:36:21.186945   69393 retry.go:31] will retry after 2.79101595s: waiting for machine to come up
	I0815 18:36:23.981271   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981732   68248 main.go:141] libmachine: (embed-certs-555028) Found IP for machine: 192.168.50.234
	I0815 18:36:23.981761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has current primary IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.981770   68248 main.go:141] libmachine: (embed-certs-555028) Reserving static IP address...
	I0815 18:36:23.982166   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.982189   68248 main.go:141] libmachine: (embed-certs-555028) DBG | skip adding static IP to network mk-embed-certs-555028 - found existing host DHCP lease matching {name: "embed-certs-555028", mac: "52:54:00:5c:59:7b", ip: "192.168.50.234"}
	I0815 18:36:23.982200   68248 main.go:141] libmachine: (embed-certs-555028) Reserved static IP address: 192.168.50.234
	I0815 18:36:23.982210   68248 main.go:141] libmachine: (embed-certs-555028) Waiting for SSH to be available...
	I0815 18:36:23.982220   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Getting to WaitForSSH function...
	I0815 18:36:23.984253   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984578   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:23.984601   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:23.984696   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH client type: external
	I0815 18:36:23.984720   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa (-rw-------)
	I0815 18:36:23.984752   68248 main.go:141] libmachine: (embed-certs-555028) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.234 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:23.984763   68248 main.go:141] libmachine: (embed-certs-555028) DBG | About to run SSH command:
	I0815 18:36:23.984772   68248 main.go:141] libmachine: (embed-certs-555028) DBG | exit 0
	I0815 18:36:24.104618   68248 main.go:141] libmachine: (embed-certs-555028) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:24.105023   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetConfigRaw
	I0815 18:36:24.105694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.108191   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108532   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.108568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.108844   68248 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/config.json ...
	I0815 18:36:24.109037   68248 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:24.109055   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.109249   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.111363   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111680   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.111725   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.111821   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.111989   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112141   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.112277   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.112454   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.112662   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.112673   68248 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:24.208951   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:24.208986   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209255   68248 buildroot.go:166] provisioning hostname "embed-certs-555028"
	I0815 18:36:24.209285   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.209514   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.212393   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.212850   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.212878   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.213010   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.213198   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213340   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.213466   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.213663   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.213821   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.213832   68248 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-555028 && echo "embed-certs-555028" | sudo tee /etc/hostname
	I0815 18:36:24.327157   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-555028
	
	I0815 18:36:24.327191   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.330193   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330577   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.330607   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.330824   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.331029   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331174   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.331325   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.331508   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.331713   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.331732   68248 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-555028' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-555028/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-555028' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:24.437909   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:24.437938   68248 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:24.437977   68248 buildroot.go:174] setting up certificates
	I0815 18:36:24.437987   68248 provision.go:84] configureAuth start
	I0815 18:36:24.437996   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetMachineName
	I0815 18:36:24.438264   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:24.440637   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.440961   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.440993   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.441089   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.443071   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443415   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.443448   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.443562   68248 provision.go:143] copyHostCerts
	I0815 18:36:24.443622   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:24.443643   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:24.443726   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:24.443843   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:24.443855   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:24.443893   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:24.443968   68248 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:24.443977   68248 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:24.444007   68248 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:24.444074   68248 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.embed-certs-555028 san=[127.0.0.1 192.168.50.234 embed-certs-555028 localhost minikube]
	I0815 18:36:24.507119   68248 provision.go:177] copyRemoteCerts
	I0815 18:36:24.507177   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:24.507202   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.509835   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510230   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.510260   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.510403   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.510606   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.510735   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.510842   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:24.590623   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:24.615635   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:36:24.643400   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:36:24.670394   68248 provision.go:87] duration metric: took 232.396705ms to configureAuth
	I0815 18:36:24.670415   68248 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:24.670609   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:24.670694   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.673303   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673685   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.673721   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.673863   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.674050   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674222   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.674354   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.674513   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:24.674673   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:24.674688   68248 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:25.149223   68429 start.go:364] duration metric: took 3m59.233021018s to acquireMachinesLock for "default-k8s-diff-port-423062"
	I0815 18:36:25.149295   68429 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:25.149306   68429 fix.go:54] fixHost starting: 
	I0815 18:36:25.149757   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:25.149799   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:25.166940   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41811
	I0815 18:36:25.167342   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:25.167882   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:25.167910   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:25.168179   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:25.168383   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:25.168553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:25.170072   68429 fix.go:112] recreateIfNeeded on default-k8s-diff-port-423062: state=Stopped err=<nil>
	I0815 18:36:25.170106   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	W0815 18:36:25.170263   68429 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:25.172091   68429 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-423062" ...
	I0815 18:36:25.173641   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Start
	I0815 18:36:25.173831   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring networks are active...
	I0815 18:36:25.174594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network default is active
	I0815 18:36:25.174981   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Ensuring network mk-default-k8s-diff-port-423062 is active
	I0815 18:36:25.175410   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Getting domain xml...
	I0815 18:36:25.176275   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Creating domain...
	I0815 18:36:24.928110   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:24.928140   68248 machine.go:96] duration metric: took 819.089931ms to provisionDockerMachine
	I0815 18:36:24.928156   68248 start.go:293] postStartSetup for "embed-certs-555028" (driver="kvm2")
	I0815 18:36:24.928170   68248 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:24.928190   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:24.928513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:24.928542   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:24.931301   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931756   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:24.931799   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:24.931846   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:24.932028   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:24.932311   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:24.932477   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.011373   68248 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:25.015677   68248 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:25.015707   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:25.015798   68248 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:25.015900   68248 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:25.016014   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:25.025465   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:25.049662   68248 start.go:296] duration metric: took 121.491742ms for postStartSetup
	I0815 18:36:25.049704   68248 fix.go:56] duration metric: took 18.508823511s for fixHost
	I0815 18:36:25.049728   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.052184   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052538   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.052583   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.052718   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.052904   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053099   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.053271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.053438   68248 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:25.053604   68248 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.50.234 22 <nil> <nil>}
	I0815 18:36:25.053614   68248 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:25.149075   68248 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723746985.122186042
	
	I0815 18:36:25.149095   68248 fix.go:216] guest clock: 1723746985.122186042
	I0815 18:36:25.149103   68248 fix.go:229] Guest: 2024-08-15 18:36:25.122186042 +0000 UTC Remote: 2024-08-15 18:36:25.049708543 +0000 UTC m=+260.258232753 (delta=72.477499ms)
	I0815 18:36:25.149131   68248 fix.go:200] guest clock delta is within tolerance: 72.477499ms
	I0815 18:36:25.149135   68248 start.go:83] releasing machines lock for "embed-certs-555028", held for 18.608287436s
	I0815 18:36:25.149158   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.149408   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:25.152125   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152542   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.152568   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.152742   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153260   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153439   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:36:25.153539   68248 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:25.153587   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.153639   68248 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:25.153659   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:36:25.156311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156504   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156740   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156769   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.156847   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:25.156883   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:25.157040   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157122   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:36:25.157303   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157318   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:36:25.157473   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157479   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.157647   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:36:25.233725   68248 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:25.253737   68248 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:25.402047   68248 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:25.409250   68248 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:25.409328   68248 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:25.426491   68248 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:25.426514   68248 start.go:495] detecting cgroup driver to use...
	I0815 18:36:25.426580   68248 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:25.445177   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:25.459432   68248 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:25.459512   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:25.473777   68248 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:25.488144   68248 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:25.627700   68248 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:25.791278   68248 docker.go:233] disabling docker service ...
	I0815 18:36:25.791349   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:25.810146   68248 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:25.825131   68248 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:25.975457   68248 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:26.106757   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:26.123053   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:26.142739   68248 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:26.142804   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.153163   68248 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:26.153217   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.163863   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.175028   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.192480   68248 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:26.208933   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.219825   68248 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.245623   68248 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:26.256645   68248 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:26.265947   68248 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:26.266004   68248 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:26.278665   68248 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:26.289519   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:26.423656   68248 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:26.560919   68248 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:26.560996   68248 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:26.565696   68248 start.go:563] Will wait 60s for crictl version
	I0815 18:36:26.565764   68248 ssh_runner.go:195] Run: which crictl
	I0815 18:36:26.569498   68248 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:26.609872   68248 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:26.609948   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.645300   68248 ssh_runner.go:195] Run: crio --version
	I0815 18:36:26.681229   68248 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:26.682461   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetIP
	I0815 18:36:26.685495   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686011   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:36:26.686037   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:36:26.686323   68248 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:26.690590   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:26.703512   68248 kubeadm.go:883] updating cluster {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:26.703679   68248 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:26.703748   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:26.740601   68248 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:26.740679   68248 ssh_runner.go:195] Run: which lz4
	I0815 18:36:26.744798   68248 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:26.748894   68248 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:26.748921   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:28.188174   68248 crio.go:462] duration metric: took 1.443420751s to copy over tarball
	I0815 18:36:28.188254   68248 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:26.428013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting to get IP...
	I0815 18:36:26.428929   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429397   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.429513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.429391   69513 retry.go:31] will retry after 296.45967ms: waiting for machine to come up
	I0815 18:36:26.727871   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728273   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.728298   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.728237   69513 retry.go:31] will retry after 258.379179ms: waiting for machine to come up
	I0815 18:36:26.988915   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:26.989472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:26.989374   69513 retry.go:31] will retry after 418.611169ms: waiting for machine to come up
	I0815 18:36:27.409905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410358   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.410398   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.410327   69513 retry.go:31] will retry after 566.642237ms: waiting for machine to come up
	I0815 18:36:27.978717   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979183   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:27.979215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:27.979125   69513 retry.go:31] will retry after 740.292473ms: waiting for machine to come up
	I0815 18:36:28.720587   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.720970   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:28.721008   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:28.720941   69513 retry.go:31] will retry after 610.435484ms: waiting for machine to come up
	I0815 18:36:29.333342   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333696   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:29.333731   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:29.333632   69513 retry.go:31] will retry after 1.059086771s: waiting for machine to come up
	I0815 18:36:30.394125   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394560   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:30.394589   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:30.394519   69513 retry.go:31] will retry after 1.279753887s: waiting for machine to come up
	I0815 18:36:30.309340   68248 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.121056035s)
	I0815 18:36:30.309382   68248 crio.go:469] duration metric: took 2.121176349s to extract the tarball
	I0815 18:36:30.309394   68248 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:30.346520   68248 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:30.394771   68248 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:30.394789   68248 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:30.394799   68248 kubeadm.go:934] updating node { 192.168.50.234 8443 v1.31.0 crio true true} ...
	I0815 18:36:30.394951   68248 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-555028 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.234
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:30.395033   68248 ssh_runner.go:195] Run: crio config
	I0815 18:36:30.439636   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:30.439663   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:30.439678   68248 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:30.439707   68248 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.234 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-555028 NodeName:embed-certs-555028 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.234"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.234 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:30.439899   68248 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.234
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-555028"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.234
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.234"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:30.439976   68248 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:30.449774   68248 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:30.449842   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:30.458892   68248 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0815 18:36:30.475171   68248 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:30.490942   68248 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0815 18:36:30.507498   68248 ssh_runner.go:195] Run: grep 192.168.50.234	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:30.511254   68248 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.234	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:30.522772   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:30.646060   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:30.667948   68248 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028 for IP: 192.168.50.234
	I0815 18:36:30.667974   68248 certs.go:194] generating shared ca certs ...
	I0815 18:36:30.667994   68248 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:30.668178   68248 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:30.668231   68248 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:30.668244   68248 certs.go:256] generating profile certs ...
	I0815 18:36:30.668360   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/client.key
	I0815 18:36:30.668442   68248 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key.539203f3
	I0815 18:36:30.668524   68248 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key
	I0815 18:36:30.668686   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:30.668725   68248 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:30.668737   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:30.668774   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:30.668807   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:30.668836   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:30.668941   68248 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:30.669810   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:30.721245   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:30.753016   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:30.782005   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:30.815008   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 18:36:30.847615   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:30.871566   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:30.894778   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/embed-certs-555028/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:30.919167   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:30.942597   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:30.965395   68248 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:30.988959   68248 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:31.005578   68248 ssh_runner.go:195] Run: openssl version
	I0815 18:36:31.011697   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:31.022496   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027102   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.027154   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:31.033475   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:31.044793   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:31.055793   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060642   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.060692   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:31.066544   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:31.077637   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:31.088468   68248 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093295   68248 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.093347   68248 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:31.098908   68248 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
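The three ln -fs commands above give each CA bundle its OpenSSL subject-hash name (e.g. b5213941.0), which is how OpenSSL's certificate-directory lookup finds entries under /etc/ssl/certs. A minimal Go sketch of the same pattern, shelling out to the same openssl flags seen in the log; linkByHash is a hypothetical helper and the paths assume it runs as root on the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash mimics the `openssl x509 -hash -noout` + `ln -fs` pair from the log:
// OpenSSL looks certificates up in the certs directory by <subject-hash>.0.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", certsDir, hash)
	os.Remove(link) // replace any existing link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}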
	I0815 18:36:31.109856   68248 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:31.114519   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:31.120709   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:31.126754   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:31.132917   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:31.138739   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:31.144785   68248 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
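The -checkend 86400 runs above ask openssl whether each control-plane certificate expires within the next 24 hours before the existing files are reused. A minimal sketch of the equivalent check in Go's crypto/x509; expiresWithin is a hypothetical helper and the certificate path is only illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before now+d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}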
	I0815 18:36:31.150604   68248 kubeadm.go:392] StartCluster: {Name:embed-certs-555028 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-555028 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:31.150702   68248 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:31.150755   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.192152   68248 cri.go:89] found id: ""
	I0815 18:36:31.192253   68248 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:31.203076   68248 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:31.203099   68248 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:31.203151   68248 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:31.213659   68248 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:31.215070   68248 kubeconfig.go:125] found "embed-certs-555028" server: "https://192.168.50.234:8443"
	I0815 18:36:31.218243   68248 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:31.228210   68248 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.234
	I0815 18:36:31.228245   68248 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:31.228267   68248 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:31.228317   68248 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:31.275944   68248 cri.go:89] found id: ""
	I0815 18:36:31.276031   68248 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:31.294466   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:31.307241   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:31.307276   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:31.307327   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:36:31.316654   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:31.316722   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:31.326475   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:36:31.335726   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:31.335796   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:31.345063   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.353576   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:31.353628   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:31.362449   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:36:31.370717   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:31.370792   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:31.379827   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:31.389001   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:31.510611   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.158537   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.647891555s)
	I0815 18:36:33.158574   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.376600   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.459742   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:33.545503   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:33.545595   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.046191   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.546256   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:34.571236   68248 api_server.go:72] duration metric: took 1.025744612s to wait for apiserver process to appear ...
	I0815 18:36:34.571275   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:34.571297   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:31.675513   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676013   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:31.676042   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:31.675960   69513 retry.go:31] will retry after 1.669099573s: waiting for machine to come up
	I0815 18:36:33.348089   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348611   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:33.348639   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:33.348575   69513 retry.go:31] will retry after 1.613394267s: waiting for machine to come up
	I0815 18:36:34.963674   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964187   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:34.964215   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:34.964146   69513 retry.go:31] will retry after 2.128578928s: waiting for machine to come up
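The libmachine lines above poll the libvirt network for the VM's DHCP-assigned IP and sleep for a growing interval between attempts. A minimal sketch of that wait-with-growing-delay pattern, assuming a hypothetical hasIP probe rather than minikube's real retry helper:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// hasIP stands in for "the domain reports a current IP address in the network".
func hasIP() bool { return rand.Intn(10) == 0 }

// waitForIP keeps probing until hasIP succeeds or maxWait elapses,
// lengthening the pause between probes as the log's retry lines do.
func waitForIP(maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := time.Second
	for time.Now().Before(deadline) {
		if hasIP() {
			return nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the wait between probes
	}
	return errors.New("timed out waiting for machine IP")
}

func main() {
	if err := waitForIP(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}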
	I0815 18:36:37.262138   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.262168   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.262184   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.310539   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:37.310569   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:37.571713   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:37.590002   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:37.590062   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.071526   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.076179   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:38.076212   68248 api_server.go:103] status: https://192.168.50.234:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:38.571714   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:36:38.576518   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:36:38.582358   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:38.582381   68248 api_server.go:131] duration metric: took 4.011097638s to wait for apiserver health ...
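The healthz wait above polls https://192.168.50.234:8443/healthz until it answers 200, treating the anonymous-user 403 and the post-start-hook 500 responses as "not ready yet". A minimal sketch of such a poll, assuming it is acceptable for an unauthenticated probe to skip TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns 200 OK or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The probe is unauthenticated, so the apiserver cert is not verified here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.234:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}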
	I0815 18:36:38.582393   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:36:38.582401   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:38.584203   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:36:38.585513   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:38.604350   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:38.645538   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:38.653445   68248 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:38.653476   68248 system_pods.go:61] "coredns-6f6b679f8f-sjx7c" [93a037b9-1e7c-471a-b62d-d7898b2b8287] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:38.653486   68248 system_pods.go:61] "etcd-embed-certs-555028" [7e526b10-7acd-4d25-9847-8e11e21ba8c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:38.653495   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [3f317b0f-15a1-4e7d-8ca5-3cdf70dee711] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:38.653501   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [431113cd-bce9-4ecb-8233-c5463875f1b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:38.653506   68248 system_pods.go:61] "kube-proxy-dzwt7" [a8101c7e-c010-45a3-8746-0dc20c7ef0e2] Running
	I0815 18:36:38.653513   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [84a5d051-d8c1-4097-b92c-e2f0d7a03385] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:38.653520   68248 system_pods.go:61] "metrics-server-6867b74b74-wp5rn" [222160bf-6774-49a5-9f30-7582748c8a82] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:38.653534   68248 system_pods.go:61] "storage-provisioner" [e88c8785-2d8b-47b6-850f-e6cda74a4f5a] Running
	I0815 18:36:38.653549   68248 system_pods.go:74] duration metric: took 7.98765ms to wait for pod list to return data ...
	I0815 18:36:38.653558   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:38.656864   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:38.656893   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:38.656906   68248 node_conditions.go:105] duration metric: took 3.340245ms to run NodePressure ...
	I0815 18:36:38.656923   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:38.918518   68248 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923148   68248 kubeadm.go:739] kubelet initialised
	I0815 18:36:38.923168   68248 kubeadm.go:740] duration metric: took 4.62305ms waiting for restarted kubelet to initialise ...
	I0815 18:36:38.923177   68248 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:38.927933   68248 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.934928   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934953   68248 pod_ready.go:82] duration metric: took 6.994953ms for pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.934965   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "coredns-6f6b679f8f-sjx7c" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.934974   68248 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.939533   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939558   68248 pod_ready.go:82] duration metric: took 4.573835ms for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.939568   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "etcd-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.939575   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:38.943567   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943590   68248 pod_ready.go:82] duration metric: took 4.004869ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:38.943601   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:38.943608   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.049176   68248 pod_ready.go:98] node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049203   68248 pod_ready.go:82] duration metric: took 105.585473ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:39.049212   68248 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-555028" hosting pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-555028" has status "Ready":"False"
	I0815 18:36:39.049219   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449514   68248 pod_ready.go:93] pod "kube-proxy-dzwt7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:39.449539   68248 pod_ready.go:82] duration metric: took 400.311062ms for pod "kube-proxy-dzwt7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:39.449548   68248 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
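Each pod_ready wait above fetches the pod from the kube-system namespace and inspects its Ready condition, skipping ahead while the node itself still reports Ready=False. A minimal client-go sketch of the per-pod check, assuming a kubeconfig at the default location; the pod name is just the kube-proxy pod from this run:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-dzwt7", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s Ready=%v\n", pod.Name, podReady(pod))
}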
	I0815 18:36:37.094139   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094640   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:37.094670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:37.094583   69513 retry.go:31] will retry after 2.268267509s: waiting for machine to come up
	I0815 18:36:39.365595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.365975   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | unable to find current IP address of domain default-k8s-diff-port-423062 in network mk-default-k8s-diff-port-423062
	I0815 18:36:39.366007   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | I0815 18:36:39.365938   69513 retry.go:31] will retry after 3.286154075s: waiting for machine to come up
	I0815 18:36:44.301710   68713 start.go:364] duration metric: took 3m51.402501772s to acquireMachinesLock for "old-k8s-version-278865"
	I0815 18:36:44.301771   68713 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:36:44.301792   68713 fix.go:54] fixHost starting: 
	I0815 18:36:44.302227   68713 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:44.302265   68713 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:44.319819   68713 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38539
	I0815 18:36:44.320335   68713 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:44.320975   68713 main.go:141] libmachine: Using API Version  1
	I0815 18:36:44.321003   68713 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:44.321380   68713 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:44.321572   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:36:44.321720   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetState
	I0815 18:36:44.323551   68713 fix.go:112] recreateIfNeeded on old-k8s-version-278865: state=Stopped err=<nil>
	I0815 18:36:44.323586   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	W0815 18:36:44.323748   68713 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:36:44.325761   68713 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-278865" ...
	I0815 18:36:41.456648   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:43.456917   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:42.653801   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654221   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has current primary IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.654251   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Found IP for machine: 192.168.61.7
	I0815 18:36:42.654268   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserving static IP address...
	I0815 18:36:42.654714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.654759   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | skip adding static IP to network mk-default-k8s-diff-port-423062 - found existing host DHCP lease matching {name: "default-k8s-diff-port-423062", mac: "52:54:00:83:9a:f2", ip: "192.168.61.7"}
	I0815 18:36:42.654778   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Reserved static IP address: 192.168.61.7
	I0815 18:36:42.654798   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Waiting for SSH to be available...
	I0815 18:36:42.654815   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Getting to WaitForSSH function...
	I0815 18:36:42.657618   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.657968   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.657996   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.658093   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH client type: external
	I0815 18:36:42.658115   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa (-rw-------)
	I0815 18:36:42.658200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:36:42.658223   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | About to run SSH command:
	I0815 18:36:42.658234   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | exit 0
	I0815 18:36:42.780714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | SSH cmd err, output: <nil>: 
	I0815 18:36:42.781095   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetConfigRaw
	I0815 18:36:42.781734   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:42.784384   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.784820   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.784853   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.785137   68429 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/config.json ...
	I0815 18:36:42.785364   68429 machine.go:93] provisionDockerMachine start ...
	I0815 18:36:42.785390   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:42.785599   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.788083   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.788465   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.788655   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.788833   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789006   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.789145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.789301   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.789607   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.789625   68429 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:36:42.889002   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:36:42.889031   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889237   68429 buildroot.go:166] provisioning hostname "default-k8s-diff-port-423062"
	I0815 18:36:42.889260   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:42.889434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:42.892072   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892422   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:42.892445   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:42.892645   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:42.892846   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.892995   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:42.893148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:42.893286   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:42.893490   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:42.893505   68429 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-423062 && echo "default-k8s-diff-port-423062" | sudo tee /etc/hostname
	I0815 18:36:43.008310   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-423062
	
	I0815 18:36:43.008347   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.011091   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011446   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.011472   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.011653   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.011864   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012027   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.012159   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.012321   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.012518   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.012537   68429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-423062' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-423062/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-423062' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:36:43.121095   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:36:43.121123   68429 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:36:43.121157   68429 buildroot.go:174] setting up certificates
	I0815 18:36:43.121172   68429 provision.go:84] configureAuth start
	I0815 18:36:43.121186   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetMachineName
	I0815 18:36:43.121510   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:43.123863   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124178   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.124200   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.124312   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.126385   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126633   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.126667   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.126784   68429 provision.go:143] copyHostCerts
	I0815 18:36:43.126861   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:36:43.126884   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:36:43.126944   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:36:43.127052   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:36:43.127062   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:36:43.127090   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:36:43.127177   68429 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:36:43.127187   68429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:36:43.127215   68429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:36:43.127286   68429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-423062 san=[127.0.0.1 192.168.61.7 default-k8s-diff-port-423062 localhost minikube]
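provision.go then signs a docker-machine server certificate whose SAN list is the one printed above (127.0.0.1, 192.168.61.7, the profile name, localhost, minikube). A minimal crypto/x509 sketch of signing such a certificate from an existing CA; the file names are illustrative and the CA key is assumed to be an RSA PKCS#1 PEM:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// mustDecode reads the first PEM block from a file or panics.
func mustDecode(path string) *pem.Block {
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	// Load the existing CA (paths are illustrative).
	caCert, err := x509.ParseCertificate(mustDecode("ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem").Bytes)
	if err != nil {
		panic(err)
	}

	// Fresh key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "default-k8s-diff-port-423062"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the log: loopback, the VM IP, and the host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.7")},
		DNSNames:    []string{"default-k8s-diff-port-423062", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}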
	I0815 18:36:43.627396   68429 provision.go:177] copyRemoteCerts
	I0815 18:36:43.627460   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:36:43.627485   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.630025   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630311   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.630340   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.630479   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.630670   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.630850   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.630976   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:43.712571   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:36:43.738904   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0815 18:36:43.764328   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 18:36:43.787211   68429 provision.go:87] duration metric: took 666.026026ms to configureAuth
	I0815 18:36:43.787241   68429 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:36:43.787467   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:43.787567   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:43.789803   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790210   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:43.790232   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:43.790432   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:43.790604   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790729   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:43.790905   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:43.791021   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:43.791169   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:43.791187   68429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:36:44.067277   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:36:44.067307   68429 machine.go:96] duration metric: took 1.281926749s to provisionDockerMachine
	I0815 18:36:44.067322   68429 start.go:293] postStartSetup for "default-k8s-diff-port-423062" (driver="kvm2")
	I0815 18:36:44.067335   68429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:36:44.067360   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.067711   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:36:44.067749   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.070224   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070543   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.070573   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.070740   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.070925   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.071079   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.071269   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.152784   68429 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:36:44.157264   68429 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:36:44.157291   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:36:44.157364   68429 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:36:44.157461   68429 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:36:44.157580   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:36:44.168520   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:44.195223   68429 start.go:296] duration metric: took 127.886016ms for postStartSetup
	I0815 18:36:44.195268   68429 fix.go:56] duration metric: took 19.045962302s for fixHost
	I0815 18:36:44.195292   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.197711   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198065   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.198090   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.198281   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.198438   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198614   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.198768   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.198959   68429 main.go:141] libmachine: Using SSH client type: native
	I0815 18:36:44.199154   68429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.61.7 22 <nil> <nil>}
	I0815 18:36:44.199172   68429 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:36:44.301519   68429 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747004.273982003
	
	I0815 18:36:44.301543   68429 fix.go:216] guest clock: 1723747004.273982003
	I0815 18:36:44.301553   68429 fix.go:229] Guest: 2024-08-15 18:36:44.273982003 +0000 UTC Remote: 2024-08-15 18:36:44.195273929 +0000 UTC m=+258.412094909 (delta=78.708074ms)
	I0815 18:36:44.301598   68429 fix.go:200] guest clock delta is within tolerance: 78.708074ms
	I0815 18:36:44.301606   68429 start.go:83] releasing machines lock for "default-k8s-diff-port-423062", held for 19.152336719s
	I0815 18:36:44.301638   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.301903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:44.305012   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305498   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.305524   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.305742   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306240   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306425   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:44.306533   68429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:36:44.306595   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.306689   68429 ssh_runner.go:195] Run: cat /version.json
	I0815 18:36:44.306714   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:44.309649   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.309838   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310098   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310133   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310250   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310267   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:44.310296   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:44.310434   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310457   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:44.310634   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:44.310654   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310794   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:44.310798   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.310947   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:44.412125   68429 ssh_runner.go:195] Run: systemctl --version
	I0815 18:36:44.420070   68429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:36:44.566014   68429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:36:44.572209   68429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:36:44.572283   68429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:36:44.593041   68429 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:36:44.593067   68429 start.go:495] detecting cgroup driver to use...
	I0815 18:36:44.593145   68429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:36:44.613683   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:36:44.627766   68429 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:36:44.627851   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:36:44.641172   68429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:36:44.654952   68429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:36:44.778684   68429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:36:44.965548   68429 docker.go:233] disabling docker service ...
	I0815 18:36:44.965631   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:36:44.983153   68429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:36:44.999109   68429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:36:45.131097   68429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:36:45.270930   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:36:45.287846   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:36:45.309345   68429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:36:45.309407   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.320032   68429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:36:45.320092   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.331647   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.342499   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.353192   68429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:36:45.364163   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.381124   68429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.403692   68429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:36:45.415032   68429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:36:45.424798   68429 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:36:45.424859   68429 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:36:45.439077   68429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:36:45.448554   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:45.570697   68429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:36:45.719575   68429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:36:45.719655   68429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:36:45.724415   68429 start.go:563] Will wait 60s for crictl version
	I0815 18:36:45.724476   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:36:45.728443   68429 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:36:45.770935   68429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:36:45.771023   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.799588   68429 ssh_runner.go:195] Run: crio --version
	I0815 18:36:45.830915   68429 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:36:44.327259   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .Start
	I0815 18:36:44.327431   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring networks are active...
	I0815 18:36:44.328116   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network default is active
	I0815 18:36:44.328601   68713 main.go:141] libmachine: (old-k8s-version-278865) Ensuring network mk-old-k8s-version-278865 is active
	I0815 18:36:44.329081   68713 main.go:141] libmachine: (old-k8s-version-278865) Getting domain xml...
	I0815 18:36:44.331888   68713 main.go:141] libmachine: (old-k8s-version-278865) Creating domain...
	I0815 18:36:45.633882   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting to get IP...
	I0815 18:36:45.634842   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.635216   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.635286   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.635206   69670 retry.go:31] will retry after 300.377534ms: waiting for machine to come up
	I0815 18:36:45.937793   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:45.938290   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:45.938312   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:45.938236   69670 retry.go:31] will retry after 282.311084ms: waiting for machine to come up
	I0815 18:36:46.222856   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.223327   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.223350   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.223283   69670 retry.go:31] will retry after 354.299649ms: waiting for machine to come up
	I0815 18:36:46.578770   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.579337   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.579360   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.579241   69670 retry.go:31] will retry after 382.947645ms: waiting for machine to come up
	I0815 18:36:46.964003   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:46.964911   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:46.964943   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:46.964824   69670 retry.go:31] will retry after 710.757442ms: waiting for machine to come up
	I0815 18:36:47.676738   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:47.677422   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:47.677450   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:47.677360   69670 retry.go:31] will retry after 588.944709ms: waiting for machine to come up
	I0815 18:36:45.957776   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:48.456345   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:45.832411   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetIP
	I0815 18:36:45.835145   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835523   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:45.835553   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:45.835762   68429 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0815 18:36:45.840347   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:45.854348   68429 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:36:45.854471   68429 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:36:45.854527   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:45.899238   68429 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:36:45.899320   68429 ssh_runner.go:195] Run: which lz4
	I0815 18:36:45.903367   68429 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:36:45.907499   68429 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:36:45.907526   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (389136428 bytes)
	I0815 18:36:47.317850   68429 crio.go:462] duration metric: took 1.414524229s to copy over tarball
	I0815 18:36:47.317929   68429 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:36:49.443172   68429 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.125212316s)
	I0815 18:36:49.443206   68429 crio.go:469] duration metric: took 2.125324606s to extract the tarball
	I0815 18:36:49.443215   68429 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:36:49.483693   68429 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:36:49.535588   68429 crio.go:514] all images are preloaded for cri-o runtime.
	I0815 18:36:49.535617   68429 cache_images.go:84] Images are preloaded, skipping loading
	I0815 18:36:49.535627   68429 kubeadm.go:934] updating node { 192.168.61.7 8444 v1.31.0 crio true true} ...
	I0815 18:36:49.535753   68429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-423062 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:36:49.535843   68429 ssh_runner.go:195] Run: crio config
	I0815 18:36:49.587186   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:49.587215   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:49.587232   68429 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:36:49.587257   68429 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.7 APIServerPort:8444 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-423062 NodeName:default-k8s-diff-port-423062 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:36:49.587447   68429 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.7
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-423062"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:36:49.587520   68429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:36:49.598312   68429 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:36:49.598376   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:36:49.608382   68429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0815 18:36:49.624449   68429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:36:49.647224   68429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0815 18:36:49.664848   68429 ssh_runner.go:195] Run: grep 192.168.61.7	control-plane.minikube.internal$ /etc/hosts
	I0815 18:36:49.668582   68429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:36:49.680786   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:49.804940   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:49.826104   68429 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062 for IP: 192.168.61.7
	I0815 18:36:49.826130   68429 certs.go:194] generating shared ca certs ...
	I0815 18:36:49.826147   68429 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:49.826281   68429 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:36:49.826322   68429 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:36:49.826331   68429 certs.go:256] generating profile certs ...
	I0815 18:36:49.826403   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.key
	I0815 18:36:49.826461   68429 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key.534debab
	I0815 18:36:49.826528   68429 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key
	I0815 18:36:49.826667   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:36:49.826713   68429 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:36:49.826725   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:36:49.826748   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:36:49.826777   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:36:49.826810   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:36:49.826868   68429 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:36:49.827597   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:36:49.855678   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:36:49.891292   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:36:49.928612   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:36:49.961506   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0815 18:36:49.993955   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:36:50.019275   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:36:50.046773   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:36:50.074201   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:36:50.101491   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:36:50.125378   68429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:36:50.149974   68429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:36:50.166393   68429 ssh_runner.go:195] Run: openssl version
	I0815 18:36:50.172182   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:36:50.182755   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187110   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.187155   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:36:50.192956   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:36:50.203680   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:36:50.214269   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218876   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.218925   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:36:50.224463   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:36:50.234811   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:36:50.245585   68429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250397   68429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.250446   68429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:36:50.256189   68429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:36:50.267342   68429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:36:50.272011   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:36:50.278217   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:36:50.284300   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:36:50.290402   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:36:50.296174   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:36:50.301957   68429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0815 18:36:50.307807   68429 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-423062 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-423062 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:36:50.307910   68429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:36:50.307973   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.359833   68429 cri.go:89] found id: ""
	I0815 18:36:50.359923   68429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:36:50.370306   68429 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:36:50.370324   68429 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:36:50.370379   68429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:36:50.379585   68429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:36:50.380510   68429 kubeconfig.go:125] found "default-k8s-diff-port-423062" server: "https://192.168.61.7:8444"
	I0815 18:36:50.384136   68429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:36:50.393393   68429 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.7
	I0815 18:36:50.393428   68429 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:36:50.393441   68429 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:36:50.393494   68429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:36:50.428085   68429 cri.go:89] found id: ""
	I0815 18:36:50.428162   68429 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:36:50.444032   68429 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:36:50.454927   68429 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:36:50.454948   68429 kubeadm.go:157] found existing configuration files:
	
	I0815 18:36:50.455000   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0815 18:36:50.464733   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:36:50.464797   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:36:50.473973   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0815 18:36:50.482861   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:36:50.482910   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:36:50.492213   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.501173   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:36:50.501230   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:36:50.510299   68429 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0815 18:36:50.519262   68429 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:36:50.519308   68429 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:36:50.528632   68429 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:36:50.537914   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:50.655230   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:48.268221   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:48.268790   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:48.268814   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:48.268736   69670 retry.go:31] will retry after 781.489196ms: waiting for machine to come up
	I0815 18:36:49.051824   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:49.052246   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:49.052277   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:49.052182   69670 retry.go:31] will retry after 1.393037007s: waiting for machine to come up
	I0815 18:36:50.446428   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:50.446860   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:50.446892   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:50.446800   69670 retry.go:31] will retry after 1.826779004s: waiting for machine to come up
	I0815 18:36:52.275716   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:52.276208   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:52.276231   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:52.276167   69670 retry.go:31] will retry after 1.746726312s: waiting for machine to come up
	I0815 18:36:50.458388   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:52.147996   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:36:52.148026   68248 pod_ready.go:82] duration metric: took 12.698470185s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:52.148039   68248 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:54.153927   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:51.670903   68429 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.015612511s)
	I0815 18:36:51.670943   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:51.985806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.069082   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:52.189200   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:36:52.189298   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:52.689767   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.189633   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:36:53.205099   68429 api_server.go:72] duration metric: took 1.015908263s to wait for apiserver process to appear ...
	I0815 18:36:53.205136   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:36:53.205162   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:53.205695   68429 api_server.go:269] stopped: https://192.168.61.7:8444/healthz: Get "https://192.168.61.7:8444/healthz": dial tcp 192.168.61.7:8444: connect: connection refused
	I0815 18:36:53.705285   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.721139   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.721177   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:55.721193   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:55.750790   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:36:55.750825   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:36:56.205675   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.212464   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.212509   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:56.705700   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:56.716232   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:36:56.716277   68429 api_server.go:103] status: https://192.168.61.7:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:36:57.205663   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:36:57.211081   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:36:57.217736   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:36:57.217763   68429 api_server.go:131] duration metric: took 4.012620084s to wait for apiserver health ...
	I0815 18:36:57.217772   68429 cni.go:84] Creating CNI manager for ""
	I0815 18:36:57.217778   68429 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:36:57.219455   68429 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
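The 68429 lines above show minikube's api_server.go polling https://192.168.61.7:8444/healthz roughly every 500ms and treating the 403 ("system:anonymous" forbidden) and 500 (post-start hooks still pending) responses as "not ready yet" until the endpoint finally answers 200. A minimal Go sketch of that style of retry loop follows; it is not minikube's actual implementation, and the URL, timeout, and the InsecureSkipVerify setting (for the bootstrap self-signed certificate) are assumptions for illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 ("ok") or the timeout elapses. 403 and 500 responses, as seen
// in the log above, simply mean "try again".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption: the apiserver still serves a self-signed cert at this
		// point in bootstrap, so certificate verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the .205/.705 checks above
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.7:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}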
	I0815 18:36:54.025067   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:54.025508   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:54.025535   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:54.025462   69670 retry.go:31] will retry after 2.693215306s: waiting for machine to come up
	I0815 18:36:56.721740   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:36:56.722139   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:36:56.722178   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:36:56.722070   69670 retry.go:31] will retry after 3.370623363s: waiting for machine to come up
	I0815 18:36:57.220672   68429 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:36:57.241710   68429 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:36:57.262714   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:36:57.272766   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:36:57.272822   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:36:57.272836   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:36:57.272849   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:36:57.272862   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:36:57.272872   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:36:57.272887   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:36:57.272896   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:36:57.272902   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:36:57.272913   68429 system_pods.go:74] duration metric: took 10.175415ms to wait for pod list to return data ...
	I0815 18:36:57.272924   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:36:57.276880   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:36:57.276915   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:36:57.276929   68429 node_conditions.go:105] duration metric: took 3.998879ms to run NodePressure ...
	I0815 18:36:57.276951   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:36:57.554251   68429 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558062   68429 kubeadm.go:739] kubelet initialised
	I0815 18:36:57.558084   68429 kubeadm.go:740] duration metric: took 3.811943ms waiting for restarted kubelet to initialise ...
	I0815 18:36:57.558091   68429 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:57.562470   68429 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.567212   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567232   68429 pod_ready.go:82] duration metric: took 4.742538ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.567240   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.567245   68429 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.571217   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571237   68429 pod_ready.go:82] duration metric: took 3.984908ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.571247   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.571255   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.575456   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575494   68429 pod_ready.go:82] duration metric: took 4.232215ms for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.575507   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.575515   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:57.665876   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665902   68429 pod_ready.go:82] duration metric: took 90.37918ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:57.665914   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:57.665921   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.066377   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066402   68429 pod_ready.go:82] duration metric: took 400.475025ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.066411   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-proxy-bnxv7" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.066426   68429 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.465739   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465767   68429 pod_ready.go:82] duration metric: took 399.331024ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.465779   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.465787   68429 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:36:58.866772   68429 pod_ready.go:98] node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866798   68429 pod_ready.go:82] duration metric: took 401.001046ms for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:36:58.866809   68429 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-423062" hosting pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:36:58.866817   68429 pod_ready.go:39] duration metric: took 1.308717049s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:36:58.866835   68429 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:36:58.878274   68429 ops.go:34] apiserver oom_adj: -16
	I0815 18:36:58.878298   68429 kubeadm.go:597] duration metric: took 8.507965813s to restartPrimaryControlPlane
	I0815 18:36:58.878308   68429 kubeadm.go:394] duration metric: took 8.570508558s to StartCluster
	I0815 18:36:58.878327   68429 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.878499   68429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:36:58.879927   68429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:36:58.880213   68429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.7 Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:36:58.880262   68429 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:36:58.880339   68429 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880375   68429 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-423062"
	I0815 18:36:58.880374   68429 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-423062"
	W0815 18:36:58.880383   68429 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:36:58.880367   68429 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-423062"
	I0815 18:36:58.880403   68429 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.880410   68429 addons.go:243] addon metrics-server should already be in state true
	I0815 18:36:58.880414   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880422   68429 config.go:182] Loaded profile config "default-k8s-diff-port-423062": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:36:58.880428   68429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-423062"
	I0815 18:36:58.880434   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.880772   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880778   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880801   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880820   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.880826   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.880855   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.882047   68429 out.go:177] * Verifying Kubernetes components...
	I0815 18:36:58.883440   68429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:36:58.895575   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
	I0815 18:36:58.895577   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37567
	I0815 18:36:58.895739   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39491
	I0815 18:36:58.896031   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896063   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896121   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.896511   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896529   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896612   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896631   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896749   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.896768   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.896917   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.896963   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897099   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.897132   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.897483   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897527   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.897535   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.897558   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.900773   68429 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-423062"
	W0815 18:36:58.900796   68429 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:36:58.900825   68429 host.go:66] Checking if "default-k8s-diff-port-423062" exists ...
	I0815 18:36:58.901206   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.901238   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.912877   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42377
	I0815 18:36:58.912903   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37245
	I0815 18:36:58.913271   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913344   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.913835   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913845   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.913852   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.913862   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.914177   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914218   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.914361   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.914408   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.916165   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.916601   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.918553   68429 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:36:58.918560   68429 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:36:56.154697   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:36:58.654414   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
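The pod_ready.go:103 lines from processes 68248 and 68429 are repeatedly checking the PodReady condition on individual kube-system pods (and, as seen earlier in this run, skipping pods whose hosting node is not yet Ready). A short client-go sketch of that readiness check, assuming a hypothetical kubeconfig path and reusing a pod name from the log purely as an example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True - the
// same "Ready":"True"/"False" status the pod_ready log lines print.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the test harness writes its own copy.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-wp5rn", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // keep polling until Ready, as the log does
	}
}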
	I0815 18:36:58.919539   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44177
	I0815 18:36:58.919773   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:36:58.919790   68429 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:36:58.919809   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919884   68429 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:58.919900   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:36:58.919916   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.919945   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.920330   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.920343   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.920777   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.921363   68429 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:36:58.921401   68429 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:36:58.923262   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923629   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.923656   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.923684   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924108   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924256   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924319   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.924337   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.924501   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.924564   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.924688   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.924773   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.924944   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.925266   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:58.938064   68429 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38697
	I0815 18:36:58.938411   68429 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:36:58.938762   68429 main.go:141] libmachine: Using API Version  1
	I0815 18:36:58.938782   68429 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:36:58.939057   68429 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:36:58.939214   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetState
	I0815 18:36:58.941134   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .DriverName
	I0815 18:36:58.941395   68429 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:58.941414   68429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:36:58.941436   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHHostname
	I0815 18:36:58.943936   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944331   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:83:9a:f2", ip: ""} in network mk-default-k8s-diff-port-423062: {Iface:virbr3 ExpiryTime:2024-08-15 19:29:03 +0000 UTC Type:0 Mac:52:54:00:83:9a:f2 Iaid: IPaddr:192.168.61.7 Prefix:24 Hostname:default-k8s-diff-port-423062 Clientid:01:52:54:00:83:9a:f2}
	I0815 18:36:58.944355   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | domain default-k8s-diff-port-423062 has defined IP address 192.168.61.7 and MAC address 52:54:00:83:9a:f2 in network mk-default-k8s-diff-port-423062
	I0815 18:36:58.944594   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHPort
	I0815 18:36:58.944765   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHKeyPath
	I0815 18:36:58.944900   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .GetSSHUsername
	I0815 18:36:58.944977   68429 sshutil.go:53] new ssh client: &{IP:192.168.61.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/default-k8s-diff-port-423062/id_rsa Username:docker}
	I0815 18:36:59.069466   68429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:36:59.090259   68429 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:36:59.203591   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:36:59.232676   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:36:59.232705   68429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:36:59.273079   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:36:59.287625   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:36:59.287653   68429 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:36:59.359798   68429 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:36:59.359821   68429 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:36:59.406350   68429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:00.373429   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.16980511s)
	I0815 18:37:00.373477   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373495   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373501   68429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.10037967s)
	I0815 18:37:00.373546   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373563   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373787   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373805   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373848   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.373852   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.373863   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373866   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.373890   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373903   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.373879   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.373937   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.374313   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374322   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.374326   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.374344   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.374355   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.379434   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.379450   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.379666   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.379679   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.389853   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.389872   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390148   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390152   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390173   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390181   68429 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:00.390189   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) Calling .Close
	I0815 18:37:00.390396   68429 main.go:141] libmachine: (default-k8s-diff-port-423062) DBG | Closing plugin on server side
	I0815 18:37:00.390447   68429 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:00.390461   68429 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:00.390475   68429 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-423062"
	I0815 18:37:00.392530   68429 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:37:00.393703   68429 addons.go:510] duration metric: took 1.51344438s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0815 18:37:00.093896   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:00.094391   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | unable to find current IP address of domain old-k8s-version-278865 in network mk-old-k8s-version-278865
	I0815 18:37:00.094453   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | I0815 18:37:00.094333   69670 retry.go:31] will retry after 2.855023319s: waiting for machine to come up
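The repeated "will retry after N: waiting for machine to come up" lines are libmachine polling for the domain's IP address with a growing, jittered delay between attempts. A generic sketch of that retry shape follows; the helper name, intervals, and the stand-in error are illustrative and not taken from minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn until it succeeds or the deadline passes,
// sleeping a jittered, slowly growing interval between attempts - the
// same shape as the 1.8s / 2.7s / 3.4s waits in the log above.
func retryUntil(timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	wait := time.Second
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait))) // add jitter
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait += 500 * time.Millisecond // grow the base interval
	}
}

func main() {
	err := retryUntil(10*time.Second, func() error {
		// Stand-in for asking libvirt for the domain's current DHCP lease.
		return errors.New("unable to find current IP address of domain")
	})
	fmt.Println(err)
}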
	I0815 18:37:04.297557   67936 start.go:364] duration metric: took 52.755115386s to acquireMachinesLock for "no-preload-599042"
	I0815 18:37:04.297614   67936 start.go:96] Skipping create...Using existing machine configuration
	I0815 18:37:04.297639   67936 fix.go:54] fixHost starting: 
	I0815 18:37:04.298066   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:04.298096   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:04.317897   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42493
	I0815 18:37:04.318309   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:04.318797   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:04.318822   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:04.319191   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:04.319388   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:04.319543   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:04.320970   67936 fix.go:112] recreateIfNeeded on no-preload-599042: state=Stopped err=<nil>
	I0815 18:37:04.320994   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	W0815 18:37:04.321164   67936 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 18:37:04.322689   67936 out.go:177] * Restarting existing kvm2 VM for "no-preload-599042" ...
	I0815 18:37:00.654833   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:03.154235   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:02.950449   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950903   68713 main.go:141] libmachine: (old-k8s-version-278865) Found IP for machine: 192.168.39.89
	I0815 18:37:02.950931   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has current primary IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.950941   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserving static IP address...
	I0815 18:37:02.951319   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.951356   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | skip adding static IP to network mk-old-k8s-version-278865 - found existing host DHCP lease matching {name: "old-k8s-version-278865", mac: "52:54:00:b7:18:0a", ip: "192.168.39.89"}
	I0815 18:37:02.951376   68713 main.go:141] libmachine: (old-k8s-version-278865) Reserved static IP address: 192.168.39.89
	I0815 18:37:02.951393   68713 main.go:141] libmachine: (old-k8s-version-278865) Waiting for SSH to be available...
	I0815 18:37:02.951424   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Getting to WaitForSSH function...
	I0815 18:37:02.953498   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:02.953804   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:02.953927   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH client type: external
	I0815 18:37:02.953957   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa (-rw-------)
	I0815 18:37:02.953989   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:02.954001   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | About to run SSH command:
	I0815 18:37:02.954009   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | exit 0
	I0815 18:37:03.076431   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:03.076748   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetConfigRaw
	I0815 18:37:03.077325   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.079733   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080100   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.080132   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.080332   68713 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/config.json ...
	I0815 18:37:03.080537   68713 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:03.080554   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:03.080717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.082778   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083140   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.083168   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.083331   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.083482   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083612   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.083730   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.083881   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.084067   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.084078   68713 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:03.188779   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:03.188813   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189045   68713 buildroot.go:166] provisioning hostname "old-k8s-version-278865"
	I0815 18:37:03.189069   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.189284   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.191858   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192171   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.192192   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.192328   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.192533   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192676   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.192822   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.193015   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.193180   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.193192   68713 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-278865 && echo "old-k8s-version-278865" | sudo tee /etc/hostname
	I0815 18:37:03.313099   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-278865
	
	I0815 18:37:03.313129   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.315840   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316196   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.316226   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.316378   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.316608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316760   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.316885   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.317001   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.317184   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.317207   68713 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-278865' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-278865/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-278865' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:03.429897   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:03.429934   68713 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:03.429962   68713 buildroot.go:174] setting up certificates
	I0815 18:37:03.429972   68713 provision.go:84] configureAuth start
	I0815 18:37:03.429983   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetMachineName
	I0815 18:37:03.430274   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:03.432724   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433053   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.433083   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.433212   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.435181   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435514   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.435543   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.435657   68713 provision.go:143] copyHostCerts
	I0815 18:37:03.435715   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:03.435736   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:03.435804   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:03.435919   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:03.435929   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:03.435959   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:03.436045   68713 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:03.436055   68713 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:03.436082   68713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:03.436170   68713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-278865 san=[127.0.0.1 192.168.39.89 localhost minikube old-k8s-version-278865]
	I0815 18:37:03.604924   68713 provision.go:177] copyRemoteCerts
	I0815 18:37:03.604979   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:03.605003   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.607328   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607616   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.607634   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.607821   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.608016   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.608171   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.608429   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:03.690560   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:03.714632   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 18:37:03.737805   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:03.762338   68713 provision.go:87] duration metric: took 332.353741ms to configureAuth
	I0815 18:37:03.762371   68713 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:03.762543   68713 config.go:182] Loaded profile config "old-k8s-version-278865": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:37:03.762608   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:03.765626   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.765988   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:03.766018   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:03.766211   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:03.766380   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766574   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:03.766712   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:03.766897   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:03.767053   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:03.767069   68713 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:04.050635   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:04.050663   68713 machine.go:96] duration metric: took 970.113556ms to provisionDockerMachine
	I0815 18:37:04.050674   68713 start.go:293] postStartSetup for "old-k8s-version-278865" (driver="kvm2")
	I0815 18:37:04.050685   68713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:04.050717   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.051048   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:04.051081   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.053709   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054095   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.054124   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.054432   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.054622   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.054774   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.054914   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.139381   68713 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:04.145097   68713 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:04.145124   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:04.145201   68713 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:04.145298   68713 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:04.145421   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:04.156166   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:04.181562   68713 start.go:296] duration metric: took 130.872499ms for postStartSetup
	I0815 18:37:04.181605   68713 fix.go:56] duration metric: took 19.879821037s for fixHost
	I0815 18:37:04.181629   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.184268   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184652   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.184682   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.184917   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.185151   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185345   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.185502   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.185677   68713 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:04.185925   68713 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I0815 18:37:04.185938   68713 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:04.297391   68713 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747024.271483326
	
	I0815 18:37:04.297413   68713 fix.go:216] guest clock: 1723747024.271483326
	I0815 18:37:04.297423   68713 fix.go:229] Guest: 2024-08-15 18:37:04.271483326 +0000 UTC Remote: 2024-08-15 18:37:04.181610291 +0000 UTC m=+251.426055371 (delta=89.873035ms)
	I0815 18:37:04.297448   68713 fix.go:200] guest clock delta is within tolerance: 89.873035ms
	I0815 18:37:04.297455   68713 start.go:83] releasing machines lock for "old-k8s-version-278865", held for 19.99571173s
	I0815 18:37:04.297504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.297818   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:04.300970   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301425   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.301455   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.301609   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302194   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302404   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .DriverName
	I0815 18:37:04.302495   68713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:04.302545   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.302679   68713 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:04.302705   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHHostname
	I0815 18:37:04.305673   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.305903   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306066   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306092   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306273   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:04.306301   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:04.306337   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306504   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306537   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHPort
	I0815 18:37:04.306657   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306664   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHKeyPath
	I0815 18:37:04.306827   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetSSHUsername
	I0815 18:37:04.306834   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.307009   68713 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/old-k8s-version-278865/id_rsa Username:docker}
	I0815 18:37:04.409319   68713 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:04.415576   68713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:04.565772   68713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:04.571909   68713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:04.571996   68713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:04.588400   68713 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:04.588427   68713 start.go:495] detecting cgroup driver to use...
	I0815 18:37:04.588528   68713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:04.604253   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:04.619003   68713 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:04.619051   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:04.632530   68713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:04.646080   68713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:04.763855   68713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:04.922470   68713 docker.go:233] disabling docker service ...
	I0815 18:37:04.922566   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:04.937301   68713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:04.950721   68713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:05.079767   68713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:05.210207   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:05.225569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:05.247998   68713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0815 18:37:05.248070   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.262851   68713 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:05.262924   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.274489   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.285901   68713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:05.298749   68713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:05.310052   68713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:05.320992   68713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:05.321073   68713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:05.340323   68713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 18:37:05.354069   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:05.483573   68713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:05.647020   68713 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:05.647094   68713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:05.653850   68713 start.go:563] Will wait 60s for crictl version
	I0815 18:37:05.653924   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:05.658476   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:05.697818   68713 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:05.697907   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.724931   68713 ssh_runner.go:195] Run: crio --version
	I0815 18:37:05.755831   68713 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0815 18:37:01.094934   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:03.594364   68429 node_ready.go:53] node "default-k8s-diff-port-423062" has status "Ready":"False"
	I0815 18:37:05.756950   68713 main.go:141] libmachine: (old-k8s-version-278865) Calling .GetIP
	I0815 18:37:05.759791   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760188   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:18:0a", ip: ""} in network mk-old-k8s-version-278865: {Iface:virbr4 ExpiryTime:2024-08-15 19:26:35 +0000 UTC Type:0 Mac:52:54:00:b7:18:0a Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:old-k8s-version-278865 Clientid:01:52:54:00:b7:18:0a}
	I0815 18:37:05.760220   68713 main.go:141] libmachine: (old-k8s-version-278865) DBG | domain old-k8s-version-278865 has defined IP address 192.168.39.89 and MAC address 52:54:00:b7:18:0a in network mk-old-k8s-version-278865
	I0815 18:37:05.760468   68713 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:05.764753   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:05.777462   68713 kubeadm.go:883] updating cluster {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:05.777614   68713 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 18:37:05.777679   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:05.848895   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:05.848967   68713 ssh_runner.go:195] Run: which lz4
	I0815 18:37:05.853103   68713 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0815 18:37:05.858012   68713 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0815 18:37:05.858046   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0815 18:37:07.520567   68713 crio.go:462] duration metric: took 1.667489785s to copy over tarball
	I0815 18:37:07.520642   68713 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0815 18:37:04.324093   67936 main.go:141] libmachine: (no-preload-599042) Calling .Start
	I0815 18:37:04.324263   67936 main.go:141] libmachine: (no-preload-599042) Ensuring networks are active...
	I0815 18:37:04.325099   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network default is active
	I0815 18:37:04.325778   67936 main.go:141] libmachine: (no-preload-599042) Ensuring network mk-no-preload-599042 is active
	I0815 18:37:04.326007   67936 main.go:141] libmachine: (no-preload-599042) Getting domain xml...
	I0815 18:37:04.328184   67936 main.go:141] libmachine: (no-preload-599042) Creating domain...
	I0815 18:37:05.626206   67936 main.go:141] libmachine: (no-preload-599042) Waiting to get IP...
	I0815 18:37:05.627374   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.627877   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.627935   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.627844   69876 retry.go:31] will retry after 199.774188ms: waiting for machine to come up
	I0815 18:37:05.829673   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:05.830213   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:05.830240   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:05.830170   69876 retry.go:31] will retry after 255.850483ms: waiting for machine to come up
	I0815 18:37:06.087766   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.088378   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.088405   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.088330   69876 retry.go:31] will retry after 351.231421ms: waiting for machine to come up
	I0815 18:37:06.440937   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:06.441597   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:06.441626   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:06.441572   69876 retry.go:31] will retry after 602.620924ms: waiting for machine to come up
	I0815 18:37:07.046269   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.046745   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.046769   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.046712   69876 retry.go:31] will retry after 578.450642ms: waiting for machine to come up
	I0815 18:37:07.627330   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:07.627832   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:07.627859   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:07.627791   69876 retry.go:31] will retry after 731.331176ms: waiting for machine to come up
	I0815 18:37:08.361310   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:08.361746   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:08.361776   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:08.361706   69876 retry.go:31] will retry after 1.089237688s: waiting for machine to come up
	I0815 18:37:05.157378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:07.162990   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:09.654672   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:06.093822   68429 node_ready.go:49] node "default-k8s-diff-port-423062" has status "Ready":"True"
	I0815 18:37:06.093853   68429 node_ready.go:38] duration metric: took 7.003558244s for node "default-k8s-diff-port-423062" to be "Ready" ...
	I0815 18:37:06.093867   68429 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:06.103462   68429 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111214   68429 pod_ready.go:93] pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.111235   68429 pod_ready.go:82] duration metric: took 7.746382ms for pod "coredns-6f6b679f8f-brc2r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.111244   68429 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117713   68429 pod_ready.go:93] pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:06.117739   68429 pod_ready.go:82] duration metric: took 6.487608ms for pod "etcd-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:06.117750   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:08.126216   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.128095   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:10.534169   68713 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.013498464s)
	I0815 18:37:10.534194   68713 crio.go:469] duration metric: took 3.013602868s to extract the tarball
	I0815 18:37:10.534201   68713 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0815 18:37:10.578998   68713 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:10.619043   68713 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0815 18:37:10.619146   68713 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:10.619246   68713 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.619247   68713 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.619278   68713 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0815 18:37:10.619275   68713 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.619291   68713 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.619304   68713 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.619322   68713 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.619405   68713 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621367   68713 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:10.621384   68713 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0815 18:37:10.621468   68713 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.621500   68713 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:10.621596   68713 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.621646   68713 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.621706   68713 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.621897   68713 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.798617   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.828530   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0815 18:37:10.859528   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:10.918714   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:10.977028   68713 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0815 18:37:10.977073   68713 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:10.977119   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:10.980573   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:10.985503   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0815 18:37:10.990642   68713 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0815 18:37:10.990684   68713 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0815 18:37:10.990733   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.000388   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.007526   68713 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0815 18:37:11.007589   68713 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.007642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.008543   68713 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0815 18:37:11.008581   68713 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.008621   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.008642   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077224   68713 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0815 18:37:11.077269   68713 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.077228   68713 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0815 18:37:11.077347   68713 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.077322   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.077371   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111299   68713 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0815 18:37:11.111376   68713 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.111387   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.111421   68713 ssh_runner.go:195] Run: which crictl
	I0815 18:37:11.111471   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.111535   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.156942   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.156944   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.156997   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.263355   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.263448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.263455   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.263544   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0815 18:37:11.291407   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.312626   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.334606   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0815 18:37:11.427937   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0815 18:37:11.433739   68713 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:11.435371   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0815 18:37:11.439448   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0815 18:37:11.439541   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0815 18:37:11.450901   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0815 18:37:11.477906   68713 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0815 18:37:11.520009   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0815 18:37:11.572349   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0815 18:37:11.686243   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0815 18:37:11.686295   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0815 18:37:11.686325   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0815 18:37:11.686378   68713 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0815 18:37:11.686420   68713 cache_images.go:92] duration metric: took 1.067250234s to LoadCachedImages
	W0815 18:37:11.686494   68713 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0815 18:37:11.686508   68713 kubeadm.go:934] updating node { 192.168.39.89 8443 v1.20.0 crio true true} ...
	I0815 18:37:11.686620   68713 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-278865 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:11.686693   68713 ssh_runner.go:195] Run: crio config
	I0815 18:37:11.736781   68713 cni.go:84] Creating CNI manager for ""
	I0815 18:37:11.736808   68713 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:11.736824   68713 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:11.736851   68713 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.89 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-278865 NodeName:old-k8s-version-278865 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 18:37:11.737039   68713 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-278865"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.89
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.89"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:11.737120   68713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 18:37:11.747511   68713 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:11.747585   68713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:11.757850   68713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0815 18:37:11.775982   68713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:11.792938   68713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I0815 18:37:11.811576   68713 ssh_runner.go:195] Run: grep 192.168.39.89	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:11.815708   68713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:11.829992   68713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:11.983884   68713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:12.002603   68713 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865 for IP: 192.168.39.89
	I0815 18:37:12.002632   68713 certs.go:194] generating shared ca certs ...
	I0815 18:37:12.002682   68713 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.002867   68713 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:12.002926   68713 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:12.002942   68713 certs.go:256] generating profile certs ...
	I0815 18:37:12.025160   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.key
	I0815 18:37:12.025296   68713 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key.b00e3c1a
	I0815 18:37:12.025351   68713 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key
	I0815 18:37:12.025516   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:12.025578   68713 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:12.025591   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:12.025627   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:12.025661   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:12.025691   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:12.025746   68713 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:12.026614   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:12.066771   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:12.109649   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:12.176744   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:12.207990   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 18:37:12.244999   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 18:37:12.282338   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:12.308761   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 18:37:12.332316   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:12.355977   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:12.379169   68713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:12.405472   68713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:12.424110   68713 ssh_runner.go:195] Run: openssl version
	I0815 18:37:12.430231   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:12.441531   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.445971   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.446061   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:12.452134   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:12.466809   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:12.478211   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482659   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.482708   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:12.490225   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:12.504908   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:12.516825   68713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521854   68713 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.521911   68713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:12.527884   68713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
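The block above is the host trust step: each CA bundle is copied under /usr/share/ca-certificates, hashed with `openssl x509 -hash -noout`, and symlinked into /etc/ssl/certs as `<hash>.0`, the directory-lookup form OpenSSL uses to find trusted CAs. A minimal Go sketch of that step follows (a hypothetical helper, not minikube's actual code; assumes openssl on PATH and write access to /etc/ssl/certs):

    // installCA is an illustrative helper: compute the OpenSSL subject hash of a
    // PEM certificate and link /etc/ssl/certs/<hash>.0 to it, mirroring the
    // "ln -fs" commands in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // Equivalent of `ln -fs`: drop any stale link, then create a fresh one.
        _ = os.Remove(link)
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }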
	I0815 18:37:12.539398   68713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:12.544010   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:12.549918   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:12.555714   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:12.561895   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:12.567736   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:12.573664   68713 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
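The `-checkend 86400` runs above verify that each existing control-plane certificate stays valid for at least the next 24 hours before the restart reuses it; a non-zero exit would force regeneration. A small illustrative check (assumed helper name, not minikube's implementation):

    // certStillValid shells out exactly like the log lines above:
    // `openssl x509 -noout -checkend 86400` exits 0 only if the certificate is
    // still valid 86400 seconds (24h) from now.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func certStillValid(path string) bool {
        return exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run() == nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
        } {
            fmt.Printf("%s valid for 24h: %v\n", p, certStillValid(p))
        }
    }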
	I0815 18:37:12.579510   68713 kubeadm.go:392] StartCluster: {Name:old-k8s-version-278865 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-278865 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.89 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:12.579627   68713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:12.579688   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.621503   68713 cri.go:89] found id: ""
	I0815 18:37:12.621576   68713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:12.632722   68713 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:12.632746   68713 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:12.632796   68713 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:12.643192   68713 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:12.644607   68713 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-278865" does not appear in /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:12.645629   68713 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-13013/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-278865" cluster setting kubeconfig missing "old-k8s-version-278865" context setting]
	I0815 18:37:12.647073   68713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:12.653052   68713 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:12.665777   68713 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.89
	I0815 18:37:12.665808   68713 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:12.665821   68713 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:12.665872   68713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:12.713574   68713 cri.go:89] found id: ""
	I0815 18:37:12.713641   68713 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:12.731459   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:12.741769   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:12.741789   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:12.741833   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:12.750990   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:12.751049   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:12.761621   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:12.771204   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:12.771261   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:12.782012   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:09.452971   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:09.453451   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:09.453494   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:09.453393   69876 retry.go:31] will retry after 1.35461204s: waiting for machine to come up
	I0815 18:37:10.809664   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:10.810127   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:10.810158   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:10.810065   69876 retry.go:31] will retry after 1.709820883s: waiting for machine to come up
	I0815 18:37:12.521458   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:12.521988   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:12.522016   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:12.521930   69876 retry.go:31] will retry after 1.401971708s: waiting for machine to come up
	I0815 18:37:13.925401   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:13.925868   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:13.925898   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:13.925824   69876 retry.go:31] will retry after 2.768002946s: waiting for machine to come up
	I0815 18:37:11.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:14.154561   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.400960   68429 pod_ready.go:103] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:13.128357   68429 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.128379   68429 pod_ready.go:82] duration metric: took 7.010621879s for pod "kube-apiserver-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.128389   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136617   68429 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.136638   68429 pod_ready.go:82] duration metric: took 8.242471ms for pod "kube-controller-manager-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.136648   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143530   68429 pod_ready.go:93] pod "kube-proxy-bnxv7" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.143551   68429 pod_ready.go:82] duration metric: took 6.895931ms for pod "kube-proxy-bnxv7" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.143563   68429 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151691   68429 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:13.151721   68429 pod_ready.go:82] duration metric: took 8.149821ms for pod "kube-scheduler-default-k8s-diff-port-423062" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:13.151735   68429 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:15.158172   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:12.791928   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:12.791994   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:12.801858   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:12.811023   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:12.811083   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
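The grep/rm sequence above is the stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not already reference https://control-plane.minikube.internal:8443 is removed so the following `kubeadm init phase kubeconfig` can regenerate it. A compact, illustrative sketch of that logic (not minikube's code):

    // cleanStaleConfig keeps a kubeconfig-style file only if it already points at
    // the expected control-plane endpoint; otherwise it is deleted for regeneration.
    package main

    import "os/exec"

    func cleanStaleConfig(endpoint string, files []string) {
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent (or the file is missing).
            if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
                _ = exec.Command("sudo", "rm", "-f", f).Run()
            }
        }
    }

    func main() {
        cleanStaleConfig("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }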
	I0815 18:37:12.822189   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:12.834293   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:12.974325   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.452192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.690442   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.798270   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:13.900783   68713 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:13.900877   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.401954   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:14.901809   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.401755   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:15.901010   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.401794   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:16.901149   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:17.401599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
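The repeated `pgrep -xnf kube-apiserver.*minikube.*` lines are a roughly 500ms polling loop waiting for the kube-apiserver process to appear after the control-plane manifests were rewritten. A minimal sketch of such a wait loop (an assumption about the mechanism, not minikube's code):

    // waitForAPIServer polls pgrep every 500ms until kube-apiserver shows up or a
    // deadline passes, mirroring the cadence of the log lines above.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // -x matches the full command line, -n picks the newest matching process.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for kube-apiserver process")
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }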
	I0815 18:37:16.694999   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:16.695488   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:16.695506   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:16.695430   69876 retry.go:31] will retry after 2.308386075s: waiting for machine to come up
	I0815 18:37:16.154692   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:18.653763   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.159197   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:19.159442   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:17.901511   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.401720   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:18.900976   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.401223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.901522   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:20.901573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.401767   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:21.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:22.401279   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:19.005581   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:19.005979   67936 main.go:141] libmachine: (no-preload-599042) DBG | unable to find current IP address of domain no-preload-599042 in network mk-no-preload-599042
	I0815 18:37:19.006008   67936 main.go:141] libmachine: (no-preload-599042) DBG | I0815 18:37:19.005930   69876 retry.go:31] will retry after 2.758801207s: waiting for machine to come up
	I0815 18:37:21.766860   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767286   67936 main.go:141] libmachine: (no-preload-599042) Found IP for machine: 192.168.72.14
	I0815 18:37:21.767303   67936 main.go:141] libmachine: (no-preload-599042) Reserving static IP address...
	I0815 18:37:21.767314   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has current primary IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.767722   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.767745   67936 main.go:141] libmachine: (no-preload-599042) Reserved static IP address: 192.168.72.14
	I0815 18:37:21.767757   67936 main.go:141] libmachine: (no-preload-599042) DBG | skip adding static IP to network mk-no-preload-599042 - found existing host DHCP lease matching {name: "no-preload-599042", mac: "52:54:00:d1:54:6d", ip: "192.168.72.14"}
	I0815 18:37:21.767768   67936 main.go:141] libmachine: (no-preload-599042) DBG | Getting to WaitForSSH function...
	I0815 18:37:21.767780   67936 main.go:141] libmachine: (no-preload-599042) Waiting for SSH to be available...
	I0815 18:37:21.769674   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.769950   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.769973   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.770072   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH client type: external
	I0815 18:37:21.770103   67936 main.go:141] libmachine: (no-preload-599042) DBG | Using SSH private key: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa (-rw-------)
	I0815 18:37:21.770134   67936 main.go:141] libmachine: (no-preload-599042) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.14 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0815 18:37:21.770147   67936 main.go:141] libmachine: (no-preload-599042) DBG | About to run SSH command:
	I0815 18:37:21.770162   67936 main.go:141] libmachine: (no-preload-599042) DBG | exit 0
	I0815 18:37:21.888536   67936 main.go:141] libmachine: (no-preload-599042) DBG | SSH cmd err, output: <nil>: 
	I0815 18:37:21.888900   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetConfigRaw
	I0815 18:37:21.889541   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:21.892351   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892730   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.892760   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.892976   67936 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/config.json ...
	I0815 18:37:21.893181   67936 machine.go:93] provisionDockerMachine start ...
	I0815 18:37:21.893203   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:21.893404   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.895471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895774   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.895812   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.895967   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.896153   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896334   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.896522   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.896697   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.896872   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.896884   67936 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 18:37:21.992598   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0815 18:37:21.992622   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.992856   67936 buildroot.go:166] provisioning hostname "no-preload-599042"
	I0815 18:37:21.992884   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:21.993095   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:21.995586   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.995902   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:21.995930   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:21.996051   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:21.996239   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996375   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:21.996538   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:21.996691   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:21.996869   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:21.996884   67936 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-599042 && echo "no-preload-599042" | sudo tee /etc/hostname
	I0815 18:37:22.106513   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-599042
	
	I0815 18:37:22.106553   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.109655   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110111   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.110143   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.110362   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.110548   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110718   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.110838   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.110970   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.111141   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.111162   67936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-599042' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-599042/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-599042' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 18:37:22.221858   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 18:37:22.221898   67936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19450-13013/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-13013/.minikube}
	I0815 18:37:22.221924   67936 buildroot.go:174] setting up certificates
	I0815 18:37:22.221938   67936 provision.go:84] configureAuth start
	I0815 18:37:22.221956   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetMachineName
	I0815 18:37:22.222278   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:22.225058   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225374   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.225410   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.225544   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.227539   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.227885   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.227929   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.228052   67936 provision.go:143] copyHostCerts
	I0815 18:37:22.228111   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem, removing ...
	I0815 18:37:22.228126   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem
	I0815 18:37:22.228190   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/ca.pem (1082 bytes)
	I0815 18:37:22.228273   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem, removing ...
	I0815 18:37:22.228282   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem
	I0815 18:37:22.228301   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/cert.pem (1123 bytes)
	I0815 18:37:22.228352   67936 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem, removing ...
	I0815 18:37:22.228359   67936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem
	I0815 18:37:22.228375   67936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-13013/.minikube/key.pem (1675 bytes)
	I0815 18:37:22.228428   67936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem org=jenkins.no-preload-599042 san=[127.0.0.1 192.168.72.14 localhost minikube no-preload-599042]
	I0815 18:37:22.383520   67936 provision.go:177] copyRemoteCerts
	I0815 18:37:22.383578   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 18:37:22.383601   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.386048   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386303   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.386338   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.386566   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.386722   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.386894   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.387036   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.470828   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 18:37:22.494929   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 18:37:22.519545   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0815 18:37:22.544417   67936 provision.go:87] duration metric: took 322.465732ms to configureAuth
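configureAuth above regenerates the docker-machine style server certificate with SANs covering the VM IP, localhost, and the machine name, then copies server.pem, server-key.pem, and ca.pem into /etc/docker on the guest. The sketch below shows roughly what issuing such a SAN-bearing certificate looks like in Go; it is self-signed for brevity (minikube signs with its CA key instead), and the names and IPs are taken from the log above:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-599042"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirror "san=[127.0.0.1 192.168.72.14 localhost minikube no-preload-599042]".
            DNSNames:    []string{"localhost", "minikube", "no-preload-599042"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.14")},
        }
        // Self-signed here; in the real flow the parent is the minikube CA certificate.
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }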
	I0815 18:37:22.544442   67936 buildroot.go:189] setting minikube options for container-runtime
	I0815 18:37:22.544661   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:22.544736   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.547284   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547610   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.547641   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.547876   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.548076   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548271   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.548413   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.548594   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.548795   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.548818   67936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0815 18:37:22.803896   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0815 18:37:22.803924   67936 machine.go:96] duration metric: took 910.728961ms to provisionDockerMachine
	I0815 18:37:22.803935   67936 start.go:293] postStartSetup for "no-preload-599042" (driver="kvm2")
	I0815 18:37:22.803945   67936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 18:37:22.803959   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:22.804274   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 18:37:22.804322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.807041   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807437   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.807467   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.807570   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.807747   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.807906   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.808002   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:22.887667   67936 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 18:37:22.892368   67936 info.go:137] Remote host: Buildroot 2023.02.9
	I0815 18:37:22.892393   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/addons for local assets ...
	I0815 18:37:22.892480   67936 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-13013/.minikube/files for local assets ...
	I0815 18:37:22.892588   67936 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem -> 202192.pem in /etc/ssl/certs
	I0815 18:37:22.892681   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 18:37:22.901987   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:22.927782   67936 start.go:296] duration metric: took 123.834401ms for postStartSetup
	I0815 18:37:22.927823   67936 fix.go:56] duration metric: took 18.630196933s for fixHost
	I0815 18:37:22.927848   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:22.930378   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930728   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:22.930755   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:22.930868   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:22.931043   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:22.931386   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:22.931538   67936 main.go:141] libmachine: Using SSH client type: native
	I0815 18:37:22.931705   67936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.72.14 22 <nil> <nil>}
	I0815 18:37:22.931718   67936 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0815 18:37:23.029393   67936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1723747042.997661196
	
	I0815 18:37:23.029423   67936 fix.go:216] guest clock: 1723747042.997661196
	I0815 18:37:23.029433   67936 fix.go:229] Guest: 2024-08-15 18:37:22.997661196 +0000 UTC Remote: 2024-08-15 18:37:22.927828036 +0000 UTC m=+353.975665928 (delta=69.83316ms)
	I0815 18:37:23.029455   67936 fix.go:200] guest clock delta is within tolerance: 69.83316ms
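The `date +%s.%N` round trip above measures guest/host clock skew, and the restart proceeds because the ~70ms delta is within tolerance. A tiny sketch of that comparison (the tolerance value and helper are assumptions):

    // withinTolerance parses the guest's `date +%s.%N` output and compares it to
    // the host clock, accepting a small amount of drift.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func withinTolerance(guestOut string, tolerance time.Duration) (time.Duration, bool) {
        secs, _ := strconv.ParseFloat(guestOut, 64)
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        return delta, math.Abs(float64(delta)) <= float64(tolerance)
    }

    func main() {
        delta, ok := withinTolerance("1723747042.997661196", 2*time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }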
	I0815 18:37:23.029465   67936 start.go:83] releasing machines lock for "no-preload-599042", held for 18.731874864s
	I0815 18:37:23.029491   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.029730   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:23.031885   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032242   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.032261   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.032449   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.032908   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033062   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:23.033149   67936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 18:37:23.033197   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.033303   67936 ssh_runner.go:195] Run: cat /version.json
	I0815 18:37:23.033322   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:23.035943   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.035987   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036327   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036433   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:23.036463   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036482   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:23.036657   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036836   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:23.036855   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.036966   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:23.037039   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037119   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:23.037183   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.037242   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:23.117399   67936 ssh_runner.go:195] Run: systemctl --version
	I0815 18:37:23.138614   67936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0815 18:37:23.287862   67936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0815 18:37:23.293943   67936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0815 18:37:23.294013   67936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 18:37:23.310957   67936 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0815 18:37:23.310987   67936 start.go:495] detecting cgroup driver to use...
	I0815 18:37:23.311067   67936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0815 18:37:23.326641   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0815 18:37:23.340650   67936 docker.go:217] disabling cri-docker service (if available) ...
	I0815 18:37:23.340708   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 18:37:23.355401   67936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 18:37:23.369033   67936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 18:37:23.480891   67936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 18:37:23.629690   67936 docker.go:233] disabling docker service ...
	I0815 18:37:23.629782   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 18:37:23.644372   67936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 18:37:23.658312   67936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 18:37:23.779999   67936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 18:37:23.902630   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 18:37:23.917453   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 18:37:23.935696   67936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0815 18:37:23.935749   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.946031   67936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0815 18:37:23.946106   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.956639   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.967148   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.978049   67936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 18:37:23.989000   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:23.999290   67936 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0815 18:37:24.017002   67936 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
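The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch the cgroup manager to cgroupfs, run conmon in the pod cgroup, and allow unprivileged low ports. An illustrative reconstruction of the resulting drop-in, embedded in a trivial Go program (section headers are assumed from stock CRI-O configuration, not read from the VM):

    package main

    import "fmt"

    // crioDropIn approximates the effect of the sed edits logged above.
    const crioDropIn = `[crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    `

    func main() { fmt.Print(crioDropIn) }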
	I0815 18:37:24.027432   67936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 18:37:24.036714   67936 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0815 18:37:24.036770   67936 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0815 18:37:24.048956   67936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
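Before restarting CRI-O, the commands above put the kernel networking prerequisites in place: the br_netfilter module (bridge-nf-call-iptables was missing, hence the modprobe) and IPv4 forwarding. A minimal root-only sketch of the same two steps:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Load br_netfilter so bridged pod traffic traverses iptables.
        _ = exec.Command("modprobe", "br_netfilter").Run()
        // Enable IPv4 forwarding, equivalent to `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        _ = os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644)
    }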
	I0815 18:37:24.058269   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:24.173548   67936 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0815 18:37:24.316383   67936 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0815 18:37:24.316462   67936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0815 18:37:24.321726   67936 start.go:563] Will wait 60s for crictl version
	I0815 18:37:24.321803   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.325718   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 18:37:24.362995   67936 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0815 18:37:24.363099   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.392678   67936 ssh_runner.go:195] Run: crio --version
	I0815 18:37:24.424128   67936 out.go:177] * Preparing Kubernetes v1.31.0 on CRI-O 1.29.1 ...
	I0815 18:37:20.654186   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:23.154893   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:21.658499   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:24.159865   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:22.901608   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.401519   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:23.901287   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.401831   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.901547   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.401220   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:25.901109   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.401441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:26.901515   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:27.401258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:24.425451   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetIP
	I0815 18:37:24.428263   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428631   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:24.428656   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:24.428833   67936 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0815 18:37:24.433343   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:24.446011   67936 kubeadm.go:883] updating cluster {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 18:37:24.446123   67936 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 18:37:24.446168   67936 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 18:37:24.484321   67936 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0". assuming images are not preloaded.
	I0815 18:37:24.484346   67936 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0 registry.k8s.io/kube-controller-manager:v1.31.0 registry.k8s.io/kube-scheduler:v1.31.0 registry.k8s.io/kube-proxy:v1.31.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0815 18:37:24.484417   67936 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.484429   67936 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.484444   67936 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.484470   67936 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.484472   67936 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.484581   67936 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.484583   67936 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0815 18:37:24.484585   67936 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.485844   67936 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0815 18:37:24.485852   67936 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.485836   67936 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.485837   67936 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.485846   67936 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:24.485906   67936 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.646217   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.653405   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.658441   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.662835   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.662858   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.681979   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.715361   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0815 18:37:24.722352   67936 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0815 18:37:24.722391   67936 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.722450   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.787439   67936 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0" does not exist at hash "045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1" in container runtime
	I0815 18:37:24.787486   67936 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.787530   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810570   67936 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0815 18:37:24.810606   67936 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0" does not exist at hash "ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494" in container runtime
	I0815 18:37:24.810612   67936 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.810630   67936 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.810666   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.810667   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841566   67936 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0" does not exist at hash "604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3" in container runtime
	I0815 18:37:24.841617   67936 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.841669   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.841698   67936 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0" does not exist at hash "1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94" in container runtime
	I0815 18:37:24.841743   67936 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:24.841800   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:24.950875   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:24.950918   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:24.950933   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:24.950989   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:24.951004   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:24.951052   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.079551   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.079601   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.079634   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.084852   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.084874   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.084910   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.216095   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0815 18:37:25.216235   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0
	I0815 18:37:25.216308   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0815 18:37:25.216384   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0
	I0815 18:37:25.216400   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0
	I0815 18:37:25.216431   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0
	I0815 18:37:25.336055   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0
	I0815 18:37:25.336126   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0
	I0815 18:37:25.336180   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.336222   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:25.336181   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0815 18:37:25.336320   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:25.352527   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0
	I0815 18:37:25.352566   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0815 18:37:25.352592   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0815 18:37:25.352639   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:25.352650   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:25.352702   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:25.355747   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0 (exists)
	I0815 18:37:25.355764   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355769   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0 (exists)
	I0815 18:37:25.355797   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0
	I0815 18:37:25.355806   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0815 18:37:25.363222   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0815 18:37:25.363257   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0 (exists)
	I0815 18:37:25.363435   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0 (exists)
	I0815 18:37:25.476601   67936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142118   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0: (2.786287506s)
	I0815 18:37:28.142134   67936 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.665496935s)
	I0815 18:37:28.142146   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0 from cache
	I0815 18:37:28.142177   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142190   67936 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0815 18:37:28.142220   67936 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:28.142244   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0
	I0815 18:37:28.142259   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:37:25.155516   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.156071   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:29.157389   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:26.658491   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:28.659080   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:27.901777   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.401103   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:28.901746   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.401521   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.901691   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.401326   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:30.901672   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.401534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:31.901013   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:32.401696   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:29.598348   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0: (1.456076001s)
	I0815 18:37:29.598380   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0 from cache
	I0815 18:37:29.598404   67936 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598407   67936 ssh_runner.go:235] Completed: which crictl: (1.456124508s)
	I0815 18:37:29.598451   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0815 18:37:29.598474   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.495864   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.897383444s)
	I0815 18:37:31.495897   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.897403663s)
	I0815 18:37:31.495902   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0815 18:37:31.495931   67936 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0815 18:37:31.495968   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:31.657799   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:34.156377   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:31.158308   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:33.159177   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:35.668218   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:32.901441   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:33.901095   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.401705   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:34.901020   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.401019   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.901094   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.400952   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:36.901717   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:37.401701   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:35.526372   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (4.030374686s)
	I0815 18:37:35.526410   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0815 18:37:35.526422   67936 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (4.030343547s)
	I0815 18:37:35.526438   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.526482   67936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:35.526483   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0
	I0815 18:37:35.570806   67936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0815 18:37:35.570906   67936 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:37.500059   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0: (1.973499408s)
	I0815 18:37:37.500098   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0 from cache
	I0815 18:37:37.500120   67936 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:37.500072   67936 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.929150036s)
	I0815 18:37:37.500208   67936 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0815 18:37:37.500161   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0
	I0815 18:37:36.157239   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.656856   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:38.158685   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:40.158728   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:37.901353   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.401426   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:38.901599   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.401173   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.901593   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.401758   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:40.901664   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.401698   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:41.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:42.401409   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:39.563532   67936 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0: (2.063281797s)
	I0815 18:37:39.563562   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0 from cache
	I0815 18:37:39.563595   67936 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:39.563642   67936 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0815 18:37:40.208180   67936 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19450-13013/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0815 18:37:40.208232   67936 cache_images.go:123] Successfully loaded all cached images
	I0815 18:37:40.208240   67936 cache_images.go:92] duration metric: took 15.723882738s to LoadCachedImages
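Because this is a no-preload profile, every image above goes through the same cycle instead of coming from a preloaded tarball: stat the cached tarball under /var/lib/minikube/images, skip the transfer when it is already on the node, remove the mismatched tag with crictl rmi, and load the tarball into the CRI-O store with podman load. The Go sketch below strings together only the commands visible in the log; the helper name and the hard-coded image subset are assumptions for illustration, not minikube's actual cache_images implementation.

```go
package main

import (
	"fmt"
	"os/exec"
)

// loadCachedImage mirrors the per-image steps in the log: confirm the cached
// tarball exists on the node, drop the stale tag, then load it via podman.
func loadCachedImage(tarball, image string) error {
	// stat -c "%s %y" <tarball>: when this succeeds, the copy step is skipped.
	if err := exec.Command("sudo", "stat", "-c", "%s %y", tarball).Run(); err != nil {
		return fmt.Errorf("tarball %s missing, would need transfer: %w", tarball, err)
	}
	// Remove the tag that does not match the cached digest (crictl rmi ...).
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
	// Load the cached tarball into the container runtime via podman.
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	images := map[string]string{ // tarball -> image tag (subset, for illustration)
		"/var/lib/minikube/images/kube-apiserver_v1.31.0": "registry.k8s.io/kube-apiserver:v1.31.0",
		"/var/lib/minikube/images/etcd_3.5.15-0":          "registry.k8s.io/etcd:3.5.15-0",
	}
	for tarball, image := range images {
		if err := loadCachedImage(tarball, image); err != nil {
			fmt.Println(err)
		}
	}
}
```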
	I0815 18:37:40.208252   67936 kubeadm.go:934] updating node { 192.168.72.14 8443 v1.31.0 crio true true} ...
	I0815 18:37:40.208416   67936 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-599042 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 18:37:40.208544   67936 ssh_runner.go:195] Run: crio config
	I0815 18:37:40.261526   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:40.261545   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:40.261552   67936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 18:37:40.261572   67936 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.14 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-599042 NodeName:no-preload-599042 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.14"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.14 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 18:37:40.261688   67936 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.14
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-599042"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.14
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.14"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 18:37:40.261742   67936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 18:37:40.271844   67936 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 18:37:40.271921   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 18:37:40.280957   67936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0815 18:37:40.297378   67936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 18:37:40.313215   67936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0815 18:37:40.329640   67936 ssh_runner.go:195] Run: grep 192.168.72.14	control-plane.minikube.internal$ /etc/hosts
	I0815 18:37:40.333331   67936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.14	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 18:37:40.344805   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:40.457352   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:40.475219   67936 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042 for IP: 192.168.72.14
	I0815 18:37:40.475238   67936 certs.go:194] generating shared ca certs ...
	I0815 18:37:40.475252   67936 certs.go:226] acquiring lock for ca certs: {Name:mkaf2a49c545ba9ac79ccda0cd19bc293b18915b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:40.475416   67936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key
	I0815 18:37:40.475475   67936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key
	I0815 18:37:40.475489   67936 certs.go:256] generating profile certs ...
	I0815 18:37:40.475591   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.key
	I0815 18:37:40.475670   67936 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key.15ba6898
	I0815 18:37:40.475714   67936 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key
	I0815 18:37:40.475865   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem (1338 bytes)
	W0815 18:37:40.475904   67936 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219_empty.pem, impossibly tiny 0 bytes
	I0815 18:37:40.475917   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 18:37:40.475950   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/ca.pem (1082 bytes)
	I0815 18:37:40.475978   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/cert.pem (1123 bytes)
	I0815 18:37:40.476012   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/certs/key.pem (1675 bytes)
	I0815 18:37:40.476069   67936 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem (1708 bytes)
	I0815 18:37:40.476738   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 18:37:40.513554   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0815 18:37:40.549095   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 18:37:40.578010   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0815 18:37:40.612637   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0815 18:37:40.639974   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 18:37:40.672937   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 18:37:40.696889   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 18:37:40.721258   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/ssl/certs/202192.pem --> /usr/share/ca-certificates/202192.pem (1708 bytes)
	I0815 18:37:40.744104   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 18:37:40.766463   67936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-13013/.minikube/certs/20219.pem --> /usr/share/ca-certificates/20219.pem (1338 bytes)
	I0815 18:37:40.788628   67936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 18:37:40.805346   67936 ssh_runner.go:195] Run: openssl version
	I0815 18:37:40.811193   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/202192.pem && ln -fs /usr/share/ca-certificates/202192.pem /etc/ssl/certs/202192.pem"
	I0815 18:37:40.822610   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826918   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:19 /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.826969   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/202192.pem
	I0815 18:37:40.832544   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/202192.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 18:37:40.843338   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 18:37:40.854032   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858512   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:06 /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.858563   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 18:37:40.864247   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 18:37:40.874724   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20219.pem && ln -fs /usr/share/ca-certificates/20219.pem /etc/ssl/certs/20219.pem"
	I0815 18:37:40.885538   67936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889849   67936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:19 /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.889899   67936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20219.pem
	I0815 18:37:40.895258   67936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/20219.pem /etc/ssl/certs/51391683.0"
	I0815 18:37:40.906841   67936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 18:37:40.911629   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 18:37:40.918085   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 18:37:40.924194   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 18:37:40.930009   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 18:37:40.935634   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 18:37:40.941168   67936 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
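Before deciding whether any control-plane certificates need to be regenerated, the restart path runs `openssl x509 -noout -in <cert> -checkend 86400` against each of them, i.e. it asks whether the certificate expires within the next 24 hours. The same check expressed in Go is sketched below, assuming a readable PEM file; the paths and the 24 h window come from the log, the function name is illustrative.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given window, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// 86400 seconds = 24h, the same window used by the logged openssl invocations.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```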
	I0815 18:37:40.946761   67936 kubeadm.go:392] StartCluster: {Name:no-preload-599042 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-599042 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 18:37:40.946836   67936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0815 18:37:40.946874   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:40.990733   67936 cri.go:89] found id: ""
	I0815 18:37:40.990808   67936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 18:37:41.002969   67936 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 18:37:41.002988   67936 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 18:37:41.003041   67936 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 18:37:41.013722   67936 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:37:41.015079   67936 kubeconfig.go:125] found "no-preload-599042" server: "https://192.168.72.14:8443"
	I0815 18:37:41.017905   67936 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 18:37:41.029240   67936 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.14
	I0815 18:37:41.029271   67936 kubeadm.go:1160] stopping kube-system containers ...
	I0815 18:37:41.029284   67936 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0815 18:37:41.029326   67936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 18:37:41.064689   67936 cri.go:89] found id: ""
	I0815 18:37:41.064754   67936 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0815 18:37:41.085195   67936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:37:41.096355   67936 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:37:41.096375   67936 kubeadm.go:157] found existing configuration files:
	
	I0815 18:37:41.096425   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:37:41.106887   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:37:41.106941   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:37:41.117599   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:37:41.127956   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:37:41.128020   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:37:41.137384   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.146075   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:37:41.146122   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:37:41.156417   67936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:37:41.165287   67936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:37:41.165325   67936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:37:41.174245   67936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:37:41.183335   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:41.314804   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.422591   67936 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.107749325s)
	I0815 18:37:42.422628   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.642065   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.710265   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:42.791233   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:37:42.791334   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.291538   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.791682   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.831611   67936 api_server.go:72] duration metric: took 1.040390925s to wait for apiserver process to appear ...
	I0815 18:37:43.831641   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:37:43.831662   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:43.832110   67936 api_server.go:269] stopped: https://192.168.72.14:8443/healthz: Get "https://192.168.72.14:8443/healthz": dial tcp 192.168.72.14:8443: connect: connection refused
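From here the restart alternates between two waits: `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500 ms until the apiserver process exists, then repeated GETs against https://192.168.72.14:8443/healthz until the endpoint stops answering with connection refused, 403, or 500 and reports healthy. A compact Go sketch of that polling loop is below; the timeout and the skipped TLS verification are assumptions for illustration, not minikube's exact settings.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

// waitForAPIServer first waits for the kube-apiserver process to appear, then
// polls /healthz until it returns 200, mirroring the two loops in the log.
func waitForAPIServer(host string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)

	// Phase 1: wait for the process (sudo pgrep -xnf kube-apiserver.*minikube.*).
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Phase 2: poll https://<host>:8443/healthz until it answers 200.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := fmt.Sprintf("https://%s:8443/healthz", host)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz check passed
			}
			// 403/500 responses mean the apiserver is up but post-start hooks are still settling.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("192.168.72.14", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```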
	I0815 18:37:41.154701   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:43.655756   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.661385   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:45.158918   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:42.901106   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.401146   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:43.901869   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.401483   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.901302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.401505   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:45.901504   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.401025   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:46.901713   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:47.401588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:44.332554   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.112640   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0815 18:37:47.112668   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0815 18:37:47.112681   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.244211   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.244246   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.332375   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.339129   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.339153   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:47.831731   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:47.836308   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:47.836330   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.331914   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.336314   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0815 18:37:48.336347   67936 api_server.go:103] status: https://192.168.72.14:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0815 18:37:48.831862   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:37:48.836012   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:37:48.842971   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:37:48.842996   67936 api_server.go:131] duration metric: took 5.011346791s to wait for apiserver health ...
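For context, the api_server.go entries above poll the apiserver's /healthz endpoint every half second and treat a 500 response (with "[-]poststarthook/... failed" lines) as "not ready yet", stopping once a 200 comes back. A minimal Go sketch of that polling pattern follows; it is illustrative only, not minikube's implementation, and the URL, timeout, and skipped certificate verification are assumptions for the example.

    // healthz_wait.go: poll an apiserver /healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        // The apiserver serves a self-signed certificate during bring-up, so
        // verification is skipped here purely for illustration.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz check passed
                }
                // A 500 means post-start hooks (rbac/bootstrap-roles, etc.)
                // have not finished; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.14:8443/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }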
	I0815 18:37:48.843008   67936 cni.go:84] Creating CNI manager for ""
	I0815 18:37:48.843015   67936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:37:48.844939   67936 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:37:48.846262   67936 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:37:48.857335   67936 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0815 18:37:48.876370   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:37:48.886582   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:37:48.886628   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:37:48.886640   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0815 18:37:48.886653   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0815 18:37:48.886666   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0815 18:37:48.886679   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0815 18:37:48.886691   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0815 18:37:48.886701   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:37:48.886711   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0815 18:37:48.886722   67936 system_pods.go:74] duration metric: took 10.329234ms to wait for pod list to return data ...
	I0815 18:37:48.886736   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:37:48.890525   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:37:48.890560   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:37:48.890571   67936 node_conditions.go:105] duration metric: took 3.828616ms to run NodePressure ...
	I0815 18:37:48.890590   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0815 18:37:46.155548   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:48.655549   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:49.183845   67936 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188602   67936 kubeadm.go:739] kubelet initialised
	I0815 18:37:49.188629   67936 kubeadm.go:740] duration metric: took 4.755371ms waiting for restarted kubelet to initialise ...
	I0815 18:37:49.188639   67936 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:49.193101   67936 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.199195   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199215   67936 pod_ready.go:82] duration metric: took 6.088761ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.199226   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.199236   67936 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.205076   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205095   67936 pod_ready.go:82] duration metric: took 5.848521ms for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.205105   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "etcd-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.205111   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.210559   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210578   67936 pod_ready.go:82] duration metric: took 5.449861ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.210587   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-apiserver-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.210594   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.281799   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281828   67936 pod_ready.go:82] duration metric: took 71.206144ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.281840   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.281850   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:49.680097   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680121   67936 pod_ready.go:82] duration metric: took 398.261641ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:49.680131   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-proxy-bwb9h" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:49.680136   67936 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.080391   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080415   67936 pod_ready.go:82] duration metric: took 400.272871ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.080425   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "kube-scheduler-no-preload-599042" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.080430   67936 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:50.482715   67936 pod_ready.go:98] node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482744   67936 pod_ready.go:82] duration metric: took 402.304556ms for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:37:50.482753   67936 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-599042" hosting pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:50.482761   67936 pod_ready.go:39] duration metric: took 1.294109816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
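The pod_ready.go entries above do two checks per pod: if the hosting node is not Ready, the wait is recorded as "(skipping!)"; otherwise the pod's own Ready condition is polled. A minimal client-go sketch of those two checks follows; it is a simplified illustration, not minikube's code, and the kubeconfig path, node name, and pod name are taken from this log only as example values.

    // pod_ready_sketch.go: skip pods on a not-Ready node, otherwise wait for PodReady.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // podReady reports whether the pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19450-13013/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, "no-preload-599042", metav1.GetOptions{})
            if err == nil && !nodeReady(node) {
                fmt.Println("node not Ready yet; skipping pod wait")
            } else if err == nil {
                pod, perr := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-6f6b679f8f-kpq9m", metav1.GetOptions{})
                if perr == nil && podReady(pod) {
                    fmt.Println("pod is Ready")
                    return
                }
            }
            time.Sleep(400 * time.Millisecond)
        }
    }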
	I0815 18:37:50.482779   67936 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:37:50.495888   67936 ops.go:34] apiserver oom_adj: -16
	I0815 18:37:50.495912   67936 kubeadm.go:597] duration metric: took 9.4929178s to restartPrimaryControlPlane
	I0815 18:37:50.495924   67936 kubeadm.go:394] duration metric: took 9.549167115s to StartCluster
	I0815 18:37:50.495943   67936 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.496020   67936 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:37:50.497743   67936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:37:50.497976   67936 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.14 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:37:50.498166   67936 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:37:50.498225   67936 config.go:182] Loaded profile config "no-preload-599042": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:37:50.498251   67936 addons.go:69] Setting storage-provisioner=true in profile "no-preload-599042"
	I0815 18:37:50.498266   67936 addons.go:69] Setting default-storageclass=true in profile "no-preload-599042"
	I0815 18:37:50.498287   67936 addons.go:234] Setting addon storage-provisioner=true in "no-preload-599042"
	I0815 18:37:50.498303   67936 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-599042"
	W0815 18:37:50.498311   67936 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:37:50.498343   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.498708   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498733   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.498745   67936 addons.go:69] Setting metrics-server=true in profile "no-preload-599042"
	I0815 18:37:50.498753   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.498783   67936 addons.go:234] Setting addon metrics-server=true in "no-preload-599042"
	W0815 18:37:50.498795   67936 addons.go:243] addon metrics-server should already be in state true
	I0815 18:37:50.498734   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.499070   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.499350   67936 out.go:177] * Verifying Kubernetes components...
	I0815 18:37:50.499436   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.499467   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.500629   67936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:37:50.514727   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43857
	I0815 18:37:50.514956   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36053
	I0815 18:37:50.515112   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515379   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.515622   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515639   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.515844   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.515866   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.516028   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.516697   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.516741   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.516854   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.517455   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.517487   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.517879   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39783
	I0815 18:37:50.518180   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.518645   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.518666   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.518975   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.519155   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.522283   67936 addons.go:234] Setting addon default-storageclass=true in "no-preload-599042"
	W0815 18:37:50.522301   67936 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:37:50.522321   67936 host.go:66] Checking if "no-preload-599042" exists ...
	I0815 18:37:50.522589   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.522616   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.533306   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42747
	I0815 18:37:50.533891   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.534378   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.534403   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.535077   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.535251   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.536333   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42567
	I0815 18:37:50.536960   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.537421   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.537484   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.537500   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.537581   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40905
	I0815 18:37:50.537832   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.537992   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.538044   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.538964   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.538983   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.539442   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.539494   67936 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:37:50.540127   67936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:37:50.540138   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.540166   67936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:37:50.540633   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:37:50.540653   67936 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:37:50.540673   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.541641   67936 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:37:47.658449   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.159642   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:50.542848   67936 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.542867   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:37:50.542883   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.544059   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544644   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.544669   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.544879   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.545056   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.545226   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.545363   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.545609   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.545957   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.545984   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.546188   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.546350   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.546459   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.546563   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.576049   67936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37711
	I0815 18:37:50.576398   67936 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:37:50.576963   67936 main.go:141] libmachine: Using API Version  1
	I0815 18:37:50.576991   67936 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:37:50.577315   67936 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:37:50.577536   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetState
	I0815 18:37:50.579041   67936 main.go:141] libmachine: (no-preload-599042) Calling .DriverName
	I0815 18:37:50.579244   67936 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.579259   67936 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:37:50.579273   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHHostname
	I0815 18:37:50.583471   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583857   67936 main.go:141] libmachine: (no-preload-599042) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:54:6d", ip: ""} in network mk-no-preload-599042: {Iface:virbr1 ExpiryTime:2024-08-15 19:37:16 +0000 UTC Type:0 Mac:52:54:00:d1:54:6d Iaid: IPaddr:192.168.72.14 Prefix:24 Hostname:no-preload-599042 Clientid:01:52:54:00:d1:54:6d}
	I0815 18:37:50.583884   67936 main.go:141] libmachine: (no-preload-599042) DBG | domain no-preload-599042 has defined IP address 192.168.72.14 and MAC address 52:54:00:d1:54:6d in network mk-no-preload-599042
	I0815 18:37:50.583984   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHPort
	I0815 18:37:50.584140   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHKeyPath
	I0815 18:37:50.584298   67936 main.go:141] libmachine: (no-preload-599042) Calling .GetSSHUsername
	I0815 18:37:50.584431   67936 sshutil.go:53] new ssh client: &{IP:192.168.72.14 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/no-preload-599042/id_rsa Username:docker}
	I0815 18:37:50.711232   67936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:37:50.738297   67936 node_ready.go:35] waiting up to 6m0s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:50.787041   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:37:50.876459   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:37:50.926707   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:37:50.926727   67936 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:37:50.967734   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:37:50.967764   67936 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:37:50.994557   67936 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:50.994580   67936 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:37:51.018573   67936 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:37:51.217167   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217199   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217511   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217561   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217570   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.217579   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.217592   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.217846   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.217889   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.217900   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.223755   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.223774   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.224006   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.224024   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.794888   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.794919   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795198   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.795227   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795240   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.795256   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.795267   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.795503   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.795521   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936158   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936178   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936438   67936 main.go:141] libmachine: (no-preload-599042) DBG | Closing plugin on server side
	I0815 18:37:51.936467   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936505   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936519   67936 main.go:141] libmachine: Making call to close driver server
	I0815 18:37:51.936528   67936 main.go:141] libmachine: (no-preload-599042) Calling .Close
	I0815 18:37:51.936754   67936 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:37:51.936773   67936 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:37:51.936785   67936 addons.go:475] Verifying addon metrics-server=true in "no-preload-599042"
	I0815 18:37:51.938619   67936 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0815 18:37:47.901026   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.401023   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:48.901661   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.401358   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:49.901410   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.401040   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:50.901695   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.401365   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.901733   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:52.401439   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:51.939743   67936 addons.go:510] duration metric: took 1.441583595s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0815 18:37:52.742152   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:51.155350   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:53.654487   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.658151   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:54.658269   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:52.901361   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.401417   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:53.901380   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.401820   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:54.901113   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.401270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.900941   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:56.901834   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:57.401496   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:55.242506   67936 node_ready.go:53] node "no-preload-599042" has status "Ready":"False"
	I0815 18:37:57.742723   67936 node_ready.go:49] node "no-preload-599042" has status "Ready":"True"
	I0815 18:37:57.742746   67936 node_ready.go:38] duration metric: took 7.00442012s for node "no-preload-599042" to be "Ready" ...
	I0815 18:37:57.742764   67936 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:37:57.747927   67936 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752478   67936 pod_ready.go:93] pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:57.752513   67936 pod_ready.go:82] duration metric: took 4.560553ms for pod "coredns-6f6b679f8f-kpq9m" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:57.752524   67936 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760896   67936 pod_ready.go:93] pod "etcd-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.760924   67936 pod_ready.go:82] duration metric: took 1.008390436s for pod "etcd-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.760937   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774529   67936 pod_ready.go:93] pod "kube-apiserver-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.774557   67936 pod_ready.go:82] duration metric: took 13.611063ms for pod "kube-apiserver-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.774568   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793851   67936 pod_ready.go:93] pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.793873   67936 pod_ready.go:82] duration metric: took 19.297089ms for pod "kube-controller-manager-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.793885   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943096   67936 pod_ready.go:93] pod "kube-proxy-bwb9h" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:58.943120   67936 pod_ready.go:82] duration metric: took 149.227014ms for pod "kube-proxy-bwb9h" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:58.943129   67936 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:56.154874   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:58.655280   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.158586   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:59.159257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:37:57.901938   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.401246   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:58.900950   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.400984   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.401707   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:00.901455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.401453   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:01.901613   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:02.401302   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:37:59.342426   67936 pod_ready.go:93] pod "kube-scheduler-no-preload-599042" in "kube-system" namespace has status "Ready":"True"
	I0815 18:37:59.342447   67936 pod_ready.go:82] duration metric: took 399.312035ms for pod "kube-scheduler-no-preload-599042" in "kube-system" namespace to be "Ready" ...
	I0815 18:37:59.342460   67936 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	I0815 18:38:01.349419   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.848558   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.154194   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:03.154779   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:01.658502   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:04.158895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:02.901914   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.401357   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:03.901258   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.400961   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:04.901697   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.401852   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.901115   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.401170   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:06.901694   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:07.401816   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:05.849586   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.349057   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:05.155847   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.653607   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:09.654245   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:06.658092   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:08.659361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:07.900966   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.401136   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:08.901534   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.400982   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:09.901126   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.401120   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.901175   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.401704   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:11.901710   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:12.401712   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:10.349443   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.349942   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.655212   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:14.154508   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:11.158562   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:13.657985   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:15.658088   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:12.901680   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.401532   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:13.901198   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:13.901295   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:13.938743   68713 cri.go:89] found id: ""
	I0815 18:38:13.938770   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.938778   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:13.938786   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:13.938843   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:13.971997   68713 cri.go:89] found id: ""
	I0815 18:38:13.972029   68713 logs.go:276] 0 containers: []
	W0815 18:38:13.972041   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:13.972048   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:13.972111   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:14.006793   68713 cri.go:89] found id: ""
	I0815 18:38:14.006825   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.006836   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:14.006844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:14.006903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:14.041546   68713 cri.go:89] found id: ""
	I0815 18:38:14.041575   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.041587   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:14.041595   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:14.041680   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:14.077614   68713 cri.go:89] found id: ""
	I0815 18:38:14.077639   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.077648   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:14.077653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:14.077704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:14.113683   68713 cri.go:89] found id: ""
	I0815 18:38:14.113711   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.113721   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:14.113730   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:14.113790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:14.149581   68713 cri.go:89] found id: ""
	I0815 18:38:14.149608   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.149616   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:14.149622   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:14.149678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:14.191576   68713 cri.go:89] found id: ""
	I0815 18:38:14.191606   68713 logs.go:276] 0 containers: []
	W0815 18:38:14.191614   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:14.191622   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:14.191635   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:14.243253   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:14.243287   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:14.256818   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:14.256845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:14.382914   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.382933   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:14.382948   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:14.461826   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:14.461859   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
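The cri.go/logs.go entries above enumerate containers by shelling out to crictl with a name filter; an empty result produces the paired `found id: ""` and `No container was found matching ...` lines. A small Go sketch of that pattern follows, as an illustration only (not the actual minikube code); it assumes crictl is on PATH and runnable via sudo, as in the commands logged above.

    // crictl_list_sketch.go: list CRI container IDs by name filter, as the log steps above do.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the container IDs crictl reports for a name filter.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(strings.TrimSpace(string(out))), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("%s: error: %v\n", name, err)
                continue
            }
            // 0 containers corresponds to the `found id: ""` lines in the log.
            fmt.Printf("%s: %d containers\n", name, len(ids))
        }
    }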
	I0815 18:38:17.005615   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:17.020977   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:17.021042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:17.070191   68713 cri.go:89] found id: ""
	I0815 18:38:17.070220   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.070232   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:17.070239   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:17.070301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:17.118582   68713 cri.go:89] found id: ""
	I0815 18:38:17.118612   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.118624   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:17.118631   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:17.118693   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:17.165380   68713 cri.go:89] found id: ""
	I0815 18:38:17.165404   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.165413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:17.165421   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:17.165483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:17.204630   68713 cri.go:89] found id: ""
	I0815 18:38:17.204660   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.204670   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:17.204678   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:17.204740   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:17.239182   68713 cri.go:89] found id: ""
	I0815 18:38:17.239210   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.239219   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:17.239226   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:17.239285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:17.276329   68713 cri.go:89] found id: ""
	I0815 18:38:17.276356   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.276367   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:17.276375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:17.276472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:17.312387   68713 cri.go:89] found id: ""
	I0815 18:38:17.312418   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.312427   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:17.312433   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:17.312485   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:17.348277   68713 cri.go:89] found id: ""
	I0815 18:38:17.348300   68713 logs.go:276] 0 containers: []
	W0815 18:38:17.348308   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:17.348315   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:17.348334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:17.424886   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:17.424924   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:17.465491   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:17.465518   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:17.517687   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:17.517719   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:17.531928   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:17.531959   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:17.606987   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:14.849001   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:17.349912   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:16.155496   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.653621   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:18.159850   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.658717   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.107740   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:20.123194   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:20.123255   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:20.163586   68713 cri.go:89] found id: ""
	I0815 18:38:20.163608   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.163619   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:20.163627   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:20.163676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:20.200171   68713 cri.go:89] found id: ""
	I0815 18:38:20.200196   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.200204   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:20.200210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:20.200270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:20.234739   68713 cri.go:89] found id: ""
	I0815 18:38:20.234770   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.234781   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:20.234788   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:20.234849   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:20.270182   68713 cri.go:89] found id: ""
	I0815 18:38:20.270206   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.270215   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:20.270220   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:20.270281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:20.303643   68713 cri.go:89] found id: ""
	I0815 18:38:20.303672   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.303682   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:20.303690   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:20.303748   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:20.339399   68713 cri.go:89] found id: ""
	I0815 18:38:20.339431   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.339441   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:20.339449   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:20.339511   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:20.377220   68713 cri.go:89] found id: ""
	I0815 18:38:20.377245   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.377252   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:20.377258   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:20.377310   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:20.411202   68713 cri.go:89] found id: ""
	I0815 18:38:20.411238   68713 logs.go:276] 0 containers: []
	W0815 18:38:20.411249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:20.411268   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:20.411282   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:20.462846   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:20.462879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:20.476569   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:20.476597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:20.554243   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:20.554269   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:20.554285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:20.637450   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:20.637493   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:19.849194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:21.849502   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:20.655378   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.154633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.160747   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.658706   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:23.182633   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:23.196953   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:23.197026   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:23.232011   68713 cri.go:89] found id: ""
	I0815 18:38:23.232039   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.232051   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:23.232064   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:23.232114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:23.266963   68713 cri.go:89] found id: ""
	I0815 18:38:23.266992   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.267000   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:23.267006   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:23.267069   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:23.306473   68713 cri.go:89] found id: ""
	I0815 18:38:23.306496   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.306504   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:23.306510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:23.306574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:23.343542   68713 cri.go:89] found id: ""
	I0815 18:38:23.343574   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.343585   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:23.343592   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:23.343652   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:23.382468   68713 cri.go:89] found id: ""
	I0815 18:38:23.382527   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.382539   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:23.382547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:23.382612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:23.418857   68713 cri.go:89] found id: ""
	I0815 18:38:23.418882   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.418891   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:23.418897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:23.418956   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:23.460971   68713 cri.go:89] found id: ""
	I0815 18:38:23.461004   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.461016   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:23.461023   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:23.461100   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:23.494139   68713 cri.go:89] found id: ""
	I0815 18:38:23.494172   68713 logs.go:276] 0 containers: []
	W0815 18:38:23.494183   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:23.494194   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:23.494208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:23.547874   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:23.547908   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:23.562251   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:23.562278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:23.636503   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:23.636528   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:23.636545   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:23.716020   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:23.716051   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.255081   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:26.270118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:26.270184   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:26.308586   68713 cri.go:89] found id: ""
	I0815 18:38:26.308612   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.308623   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:26.308630   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:26.308688   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:26.344364   68713 cri.go:89] found id: ""
	I0815 18:38:26.344394   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.344410   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:26.344418   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:26.344533   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:26.381621   68713 cri.go:89] found id: ""
	I0815 18:38:26.381642   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.381650   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:26.381655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:26.381699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:26.416091   68713 cri.go:89] found id: ""
	I0815 18:38:26.416118   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.416128   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:26.416136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:26.416195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:26.456038   68713 cri.go:89] found id: ""
	I0815 18:38:26.456068   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.456080   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:26.456088   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:26.456151   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:26.490728   68713 cri.go:89] found id: ""
	I0815 18:38:26.490758   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.490769   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:26.490776   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:26.490837   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:26.529388   68713 cri.go:89] found id: ""
	I0815 18:38:26.529422   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.529434   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:26.529440   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:26.529489   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:26.567452   68713 cri.go:89] found id: ""
	I0815 18:38:26.567475   68713 logs.go:276] 0 containers: []
	W0815 18:38:26.567484   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:26.567491   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:26.567503   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:26.641841   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:26.641863   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:26.641879   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:26.719403   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:26.719438   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:26.760460   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:26.760507   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:26.814450   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:26.814480   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:24.349319   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:26.850207   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:25.155213   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.654265   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:29.656816   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:27.663849   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:30.158417   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
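	(The pod_ready.go lines above poll the Ready condition of the metrics-server pods. An equivalent one-off check with kubectl is sketched below purely as an illustration; the pod name is copied from the log and may need adjusting for a different run:)

	# Illustrative check of the Ready condition that the pod_ready.go poller reports.
	kubectl --namespace kube-system get pod metrics-server-6867b74b74-8mppk \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Prints "False" while the pod is not Ready, matching the log lines above.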
	I0815 18:38:29.329451   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:29.344634   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:29.344706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:29.379278   68713 cri.go:89] found id: ""
	I0815 18:38:29.379308   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.379319   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:29.379326   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:29.379385   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:29.411854   68713 cri.go:89] found id: ""
	I0815 18:38:29.411881   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.411891   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:29.411898   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:29.411965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:29.443975   68713 cri.go:89] found id: ""
	I0815 18:38:29.444004   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.444014   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:29.444022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:29.444081   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:29.477919   68713 cri.go:89] found id: ""
	I0815 18:38:29.477944   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.477954   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:29.477962   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:29.478020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:29.518944   68713 cri.go:89] found id: ""
	I0815 18:38:29.518973   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.518985   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:29.518992   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:29.519052   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:29.553876   68713 cri.go:89] found id: ""
	I0815 18:38:29.553903   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.553913   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:29.553921   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:29.553974   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:29.590768   68713 cri.go:89] found id: ""
	I0815 18:38:29.590804   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.590815   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:29.590823   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:29.590879   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:29.625553   68713 cri.go:89] found id: ""
	I0815 18:38:29.625578   68713 logs.go:276] 0 containers: []
	W0815 18:38:29.625586   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:29.625595   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:29.625606   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.668447   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:29.668478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:29.721002   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:29.721035   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:29.734955   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:29.734983   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:29.808703   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:29.808726   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:29.808742   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.397781   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:32.413876   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:32.413937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:32.453689   68713 cri.go:89] found id: ""
	I0815 18:38:32.453720   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.453777   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:32.453791   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:32.453839   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:32.490529   68713 cri.go:89] found id: ""
	I0815 18:38:32.490559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.490567   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:32.490573   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:32.490622   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:32.527680   68713 cri.go:89] found id: ""
	I0815 18:38:32.527710   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.527720   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:32.527727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:32.527790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:32.564619   68713 cri.go:89] found id: ""
	I0815 18:38:32.564656   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.564667   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:32.564677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:32.564745   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:32.600530   68713 cri.go:89] found id: ""
	I0815 18:38:32.600559   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.600570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:32.600577   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:32.600639   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:32.636779   68713 cri.go:89] found id: ""
	I0815 18:38:32.636813   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.636821   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:32.636828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:32.636897   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:32.673743   68713 cri.go:89] found id: ""
	I0815 18:38:32.673774   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.673786   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:32.673794   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:32.673853   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:32.709678   68713 cri.go:89] found id: ""
	I0815 18:38:32.709708   68713 logs.go:276] 0 containers: []
	W0815 18:38:32.709719   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:32.709730   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:32.709744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:32.785961   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:32.785998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:29.349763   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:31.350398   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:33.848873   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.155992   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.159855   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:34.657783   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:32.828205   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:32.828237   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:32.894624   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:32.894666   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:32.910742   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:32.910769   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:32.980853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:35.481438   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:35.495373   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:35.495444   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:35.529184   68713 cri.go:89] found id: ""
	I0815 18:38:35.529212   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.529221   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:35.529226   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:35.529275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:35.565188   68713 cri.go:89] found id: ""
	I0815 18:38:35.565214   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.565221   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:35.565227   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:35.565281   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:35.600386   68713 cri.go:89] found id: ""
	I0815 18:38:35.600416   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.600428   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:35.600435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:35.600519   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:35.634255   68713 cri.go:89] found id: ""
	I0815 18:38:35.634278   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.634287   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:35.634293   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:35.634339   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:35.670236   68713 cri.go:89] found id: ""
	I0815 18:38:35.670260   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.670268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:35.670273   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:35.670354   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:35.707691   68713 cri.go:89] found id: ""
	I0815 18:38:35.707714   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.707722   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:35.707727   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:35.707782   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:35.745791   68713 cri.go:89] found id: ""
	I0815 18:38:35.745820   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.745832   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:35.745844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:35.745916   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:35.784167   68713 cri.go:89] found id: ""
	I0815 18:38:35.784195   68713 logs.go:276] 0 containers: []
	W0815 18:38:35.784205   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:35.784217   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:35.784234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:35.864681   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:35.864711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:35.906831   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:35.906858   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:35.960328   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:35.960366   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:35.974401   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:35.974428   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:36.044789   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
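	(Every "describe nodes" attempt above fails because nothing answers on localhost:8443. A hedged diagnostic sketch that re-runs the failing command from the log and then checks for a listener on that port — the use of ss here is an assumption, not something shown in the log:)

	# Paths and flags are taken verbatim from the failing command above.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig \
	  || sudo ss -tlnp | grep 8443 \
	  || echo "nothing listening on :8443"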
	I0815 18:38:35.849744   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.348058   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.654916   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.155585   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:36.658767   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:39.159236   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:38.545951   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:38.561473   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:38.561540   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:38.597621   68713 cri.go:89] found id: ""
	I0815 18:38:38.597658   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.597668   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:38.597679   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:38.597756   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:38.632081   68713 cri.go:89] found id: ""
	I0815 18:38:38.632115   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.632127   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:38.632135   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:38.632203   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:38.669917   68713 cri.go:89] found id: ""
	I0815 18:38:38.669944   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.669952   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:38.669958   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:38.670015   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:38.707552   68713 cri.go:89] found id: ""
	I0815 18:38:38.707574   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.707582   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:38.707588   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:38.707642   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:38.746054   68713 cri.go:89] found id: ""
	I0815 18:38:38.746082   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.746093   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:38.746101   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:38.746166   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:38.783901   68713 cri.go:89] found id: ""
	I0815 18:38:38.783933   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.783945   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:38.783952   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:38.784018   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:38.825411   68713 cri.go:89] found id: ""
	I0815 18:38:38.825441   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.825452   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:38.825459   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:38.825520   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:38.863174   68713 cri.go:89] found id: ""
	I0815 18:38:38.863219   68713 logs.go:276] 0 containers: []
	W0815 18:38:38.863231   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:38.863241   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:38.863254   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:38.914016   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:38.914045   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:38.927634   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:38.927659   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:38.993380   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:38.993407   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:38.993422   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:39.077075   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:39.077116   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:41.620219   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:41.633572   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:41.633628   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:41.670330   68713 cri.go:89] found id: ""
	I0815 18:38:41.670351   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.670358   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:41.670364   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:41.670418   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:41.706467   68713 cri.go:89] found id: ""
	I0815 18:38:41.706494   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.706502   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:41.706508   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:41.706564   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:41.742915   68713 cri.go:89] found id: ""
	I0815 18:38:41.742958   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.742970   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:41.742978   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:41.743044   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:41.778650   68713 cri.go:89] found id: ""
	I0815 18:38:41.778679   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.778687   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:41.778692   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:41.778739   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:41.813329   68713 cri.go:89] found id: ""
	I0815 18:38:41.813358   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.813369   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:41.813375   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:41.813427   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:41.851351   68713 cri.go:89] found id: ""
	I0815 18:38:41.851383   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.851391   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:41.851398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:41.851460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:41.895097   68713 cri.go:89] found id: ""
	I0815 18:38:41.895130   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.895142   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:41.895150   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:41.895209   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:41.931306   68713 cri.go:89] found id: ""
	I0815 18:38:41.931336   68713 logs.go:276] 0 containers: []
	W0815 18:38:41.931353   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:41.931365   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:41.931381   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:41.944796   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:41.944828   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:42.018868   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:42.018891   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:42.018903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:42.104304   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:42.104340   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:42.143625   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:42.143655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:40.349197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:42.850034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.655478   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.155025   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:41.159976   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:43.658013   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:45.658358   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:44.698568   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:44.712171   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:44.712247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.747043   68713 cri.go:89] found id: ""
	I0815 18:38:44.747071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.747079   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:44.747085   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:44.747143   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:44.782660   68713 cri.go:89] found id: ""
	I0815 18:38:44.782691   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.782703   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:44.782711   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:44.782765   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:44.821111   68713 cri.go:89] found id: ""
	I0815 18:38:44.821138   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.821146   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:44.821152   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:44.821222   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:44.859602   68713 cri.go:89] found id: ""
	I0815 18:38:44.859635   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.859647   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:44.859655   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:44.859717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:44.895037   68713 cri.go:89] found id: ""
	I0815 18:38:44.895071   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.895083   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:44.895090   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:44.895175   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:44.928729   68713 cri.go:89] found id: ""
	I0815 18:38:44.928759   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.928771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:44.928781   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:44.928844   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:44.963945   68713 cri.go:89] found id: ""
	I0815 18:38:44.963977   68713 logs.go:276] 0 containers: []
	W0815 18:38:44.963987   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:44.963996   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:44.964060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:45.001166   68713 cri.go:89] found id: ""
	I0815 18:38:45.001195   68713 logs.go:276] 0 containers: []
	W0815 18:38:45.001206   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:45.001218   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:45.001234   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:45.015181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:45.015209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:45.084297   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:45.084322   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:45.084334   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:45.173833   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:45.173866   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:45.211863   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:45.211899   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:47.771009   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:47.784865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:47.784926   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:44.850332   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.347985   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:46.654797   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:48.654936   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.658823   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:50.178115   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:47.818497   68713 cri.go:89] found id: ""
	I0815 18:38:47.818526   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.818538   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:47.818545   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:47.818608   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:47.857900   68713 cri.go:89] found id: ""
	I0815 18:38:47.857927   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.857935   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:47.857941   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:47.857997   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:47.895778   68713 cri.go:89] found id: ""
	I0815 18:38:47.895809   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.895822   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:47.895829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:47.895887   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:47.937410   68713 cri.go:89] found id: ""
	I0815 18:38:47.937434   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.937442   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:47.937448   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:47.937505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:47.976414   68713 cri.go:89] found id: ""
	I0815 18:38:47.976442   68713 logs.go:276] 0 containers: []
	W0815 18:38:47.976450   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:47.976455   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:47.976525   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:48.014863   68713 cri.go:89] found id: ""
	I0815 18:38:48.014891   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.014899   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:48.014906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:48.014969   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:48.053508   68713 cri.go:89] found id: ""
	I0815 18:38:48.053536   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.053546   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:48.053554   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:48.053624   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:48.088900   68713 cri.go:89] found id: ""
	I0815 18:38:48.088931   68713 logs.go:276] 0 containers: []
	W0815 18:38:48.088943   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:48.088954   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:48.088969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:48.140415   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:48.140447   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:48.155958   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:48.155985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:48.229338   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:48.229368   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:48.229383   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:48.317776   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:48.317814   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:50.860592   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:50.877070   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:50.877154   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:50.937590   68713 cri.go:89] found id: ""
	I0815 18:38:50.937615   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.937622   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:50.937628   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:50.937687   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:50.972573   68713 cri.go:89] found id: ""
	I0815 18:38:50.972603   68713 logs.go:276] 0 containers: []
	W0815 18:38:50.972614   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:50.972622   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:50.972683   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:51.008786   68713 cri.go:89] found id: ""
	I0815 18:38:51.008811   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.008820   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:51.008826   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:51.008875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:51.043076   68713 cri.go:89] found id: ""
	I0815 18:38:51.043105   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.043116   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:51.043123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:51.043186   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:51.078344   68713 cri.go:89] found id: ""
	I0815 18:38:51.078379   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.078391   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:51.078398   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:51.078453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:51.114494   68713 cri.go:89] found id: ""
	I0815 18:38:51.114521   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.114532   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:51.114540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:51.114600   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:51.153871   68713 cri.go:89] found id: ""
	I0815 18:38:51.153898   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.153909   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:51.153917   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:51.153984   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:51.187908   68713 cri.go:89] found id: ""
	I0815 18:38:51.187937   68713 logs.go:276] 0 containers: []
	W0815 18:38:51.187948   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:51.187959   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:51.187974   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:51.264172   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:51.264198   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:51.264214   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:51.345238   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:51.345285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:51.385720   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:51.385745   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:51.443313   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:51.443353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:49.849156   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.348628   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:51.154188   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.155256   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:52.658773   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:54.659127   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:53.959176   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:53.972031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:53.972101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:54.010673   68713 cri.go:89] found id: ""
	I0815 18:38:54.010699   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.010707   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:54.010714   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:54.010775   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:54.045632   68713 cri.go:89] found id: ""
	I0815 18:38:54.045662   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.045672   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:54.045678   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:54.045727   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:54.082111   68713 cri.go:89] found id: ""
	I0815 18:38:54.082134   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.082142   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:54.082148   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:54.082206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:54.118210   68713 cri.go:89] found id: ""
	I0815 18:38:54.118232   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.118239   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:54.118246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:54.118305   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:54.155474   68713 cri.go:89] found id: ""
	I0815 18:38:54.155499   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.155508   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:54.155515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:54.155572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:54.193263   68713 cri.go:89] found id: ""
	I0815 18:38:54.193298   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.193305   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:54.193311   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:54.193365   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:54.233389   68713 cri.go:89] found id: ""
	I0815 18:38:54.233416   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.233428   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:54.233435   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:54.233502   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:54.266127   68713 cri.go:89] found id: ""
	I0815 18:38:54.266155   68713 logs.go:276] 0 containers: []
	W0815 18:38:54.266164   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:54.266176   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:54.266199   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:54.318724   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:54.318762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:54.332993   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:54.333022   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:54.405895   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:54.405915   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:54.405926   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.485819   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:54.485875   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.024956   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:38:57.038182   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:38:57.038246   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:38:57.078020   68713 cri.go:89] found id: ""
	I0815 18:38:57.078044   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.078055   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:38:57.078063   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:38:57.078127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:38:57.115077   68713 cri.go:89] found id: ""
	I0815 18:38:57.115101   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.115110   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:38:57.115118   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:38:57.115179   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:38:57.152711   68713 cri.go:89] found id: ""
	I0815 18:38:57.152737   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.152747   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:38:57.152755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:38:57.152819   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:38:57.190000   68713 cri.go:89] found id: ""
	I0815 18:38:57.190034   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.190042   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:38:57.190048   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:38:57.190096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:38:57.224947   68713 cri.go:89] found id: ""
	I0815 18:38:57.224978   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.224990   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:38:57.224998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:38:57.225060   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:38:57.262329   68713 cri.go:89] found id: ""
	I0815 18:38:57.262365   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.262375   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:38:57.262383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:38:57.262458   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:38:57.299471   68713 cri.go:89] found id: ""
	I0815 18:38:57.299498   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.299507   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:38:57.299513   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:38:57.299572   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:38:57.357163   68713 cri.go:89] found id: ""
	I0815 18:38:57.357202   68713 logs.go:276] 0 containers: []
	W0815 18:38:57.357211   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:38:57.357220   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:38:57.357236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:38:57.405154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:38:57.405184   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:38:57.459245   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:38:57.459277   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:38:57.473663   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:38:57.473699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:38:57.546670   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:38:57.546699   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:38:57.546715   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:38:54.348864   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.848276   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:55.655015   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:58.158306   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:56.662722   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:38:59.159559   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.124455   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:00.137985   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:00.138053   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:00.175201   68713 cri.go:89] found id: ""
	I0815 18:39:00.175231   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.175242   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:00.175250   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:00.175328   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:00.209376   68713 cri.go:89] found id: ""
	I0815 18:39:00.209406   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.209418   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:00.209426   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:00.209484   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:00.246860   68713 cri.go:89] found id: ""
	I0815 18:39:00.246889   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.246899   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:00.246906   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:00.246965   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:00.282787   68713 cri.go:89] found id: ""
	I0815 18:39:00.282814   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.282823   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:00.282829   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:00.282875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:00.330227   68713 cri.go:89] found id: ""
	I0815 18:39:00.330256   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.330268   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:00.330275   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:00.330338   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:00.363028   68713 cri.go:89] found id: ""
	I0815 18:39:00.363061   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.363072   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:00.363079   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:00.363169   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:00.400484   68713 cri.go:89] found id: ""
	I0815 18:39:00.400522   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.400533   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:00.400540   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:00.400597   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:00.436187   68713 cri.go:89] found id: ""
	I0815 18:39:00.436225   68713 logs.go:276] 0 containers: []
	W0815 18:39:00.436238   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:00.436252   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:00.436267   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:00.481960   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:00.481985   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:00.535103   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:00.535138   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:00.548541   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:00.548568   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:00.619476   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:00.619505   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:00.619525   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:01.347916   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.349448   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:00.654384   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.155048   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:01.658374   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.658824   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:03.206473   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:03.222893   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:03.222967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:03.272249   68713 cri.go:89] found id: ""
	I0815 18:39:03.272275   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.272283   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:03.272291   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:03.272355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:03.336786   68713 cri.go:89] found id: ""
	I0815 18:39:03.336811   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.336819   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:03.336825   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:03.336884   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:03.375977   68713 cri.go:89] found id: ""
	I0815 18:39:03.376002   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.376011   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:03.376016   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:03.376063   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:03.410304   68713 cri.go:89] found id: ""
	I0815 18:39:03.410326   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.410335   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:03.410340   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:03.410403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:03.446147   68713 cri.go:89] found id: ""
	I0815 18:39:03.446176   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.446188   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:03.446195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:03.446256   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:03.482555   68713 cri.go:89] found id: ""
	I0815 18:39:03.482582   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.482591   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:03.482597   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:03.482654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:03.519477   68713 cri.go:89] found id: ""
	I0815 18:39:03.519503   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.519511   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:03.519517   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:03.519574   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:03.556539   68713 cri.go:89] found id: ""
	I0815 18:39:03.556566   68713 logs.go:276] 0 containers: []
	W0815 18:39:03.556577   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:03.556587   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:03.556602   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:03.610553   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:03.610593   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:03.625799   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:03.625827   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:03.697106   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:03.697132   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:03.697149   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:03.779089   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:03.779120   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:06.319280   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:06.333284   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:06.333355   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:06.369696   68713 cri.go:89] found id: ""
	I0815 18:39:06.369719   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.369727   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:06.369732   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:06.369780   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:06.405023   68713 cri.go:89] found id: ""
	I0815 18:39:06.405046   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.405053   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:06.405059   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:06.405113   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:06.439948   68713 cri.go:89] found id: ""
	I0815 18:39:06.439974   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.439983   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:06.439989   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:06.440048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:06.475613   68713 cri.go:89] found id: ""
	I0815 18:39:06.475642   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.475654   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:06.475664   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:06.475723   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:06.510698   68713 cri.go:89] found id: ""
	I0815 18:39:06.510721   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.510729   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:06.510735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:06.510783   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:06.545831   68713 cri.go:89] found id: ""
	I0815 18:39:06.545861   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.545873   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:06.545880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:06.545940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:06.579027   68713 cri.go:89] found id: ""
	I0815 18:39:06.579053   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.579064   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:06.579072   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:06.579132   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:06.615308   68713 cri.go:89] found id: ""
	I0815 18:39:06.615339   68713 logs.go:276] 0 containers: []
	W0815 18:39:06.615352   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:06.615371   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:06.615396   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:06.671523   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:06.671555   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:06.685556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:06.685580   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:06.765036   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:06.765059   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:06.765071   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:06.843412   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:06.843457   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:05.849018   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.849342   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:05.654854   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:07.654910   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.655240   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:06.158409   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:08.657902   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:10.658258   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:09.390799   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:09.404099   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:09.404160   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:09.439534   68713 cri.go:89] found id: ""
	I0815 18:39:09.439563   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.439582   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:09.439591   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:09.439654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:09.478933   68713 cri.go:89] found id: ""
	I0815 18:39:09.478963   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.478974   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:09.478982   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:09.479042   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:09.514396   68713 cri.go:89] found id: ""
	I0815 18:39:09.514425   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.514436   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:09.514444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:09.514510   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:09.547749   68713 cri.go:89] found id: ""
	I0815 18:39:09.547775   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.547785   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:09.547793   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:09.547856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:09.583583   68713 cri.go:89] found id: ""
	I0815 18:39:09.583611   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.583623   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:09.583631   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:09.583695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:09.616530   68713 cri.go:89] found id: ""
	I0815 18:39:09.616560   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.616570   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:09.616576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:09.616641   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:09.655167   68713 cri.go:89] found id: ""
	I0815 18:39:09.655189   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.655199   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:09.655207   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:09.655263   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:09.691368   68713 cri.go:89] found id: ""
	I0815 18:39:09.691391   68713 logs.go:276] 0 containers: []
	W0815 18:39:09.691398   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:09.691411   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:09.691426   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:09.740739   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:09.740770   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:09.755049   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:09.755074   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:09.825053   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:09.825080   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:09.825095   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:09.903036   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:09.903076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:12.441898   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:12.454637   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:12.454712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:12.494604   68713 cri.go:89] found id: ""
	I0815 18:39:12.494632   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.494640   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:12.494646   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:12.494699   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:12.531587   68713 cri.go:89] found id: ""
	I0815 18:39:12.531631   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.531642   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:12.531649   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:12.531710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:12.564991   68713 cri.go:89] found id: ""
	I0815 18:39:12.565014   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.565021   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:12.565027   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:12.565096   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:12.600667   68713 cri.go:89] found id: ""
	I0815 18:39:12.600698   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.600709   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:12.600715   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:12.600777   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:12.633658   68713 cri.go:89] found id: ""
	I0815 18:39:12.633681   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.633691   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:12.633698   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:12.633759   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:12.673709   68713 cri.go:89] found id: ""
	I0815 18:39:12.673730   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.673737   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:12.673743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:12.673790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:12.707353   68713 cri.go:89] found id: ""
	I0815 18:39:12.707378   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.707385   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:12.707390   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:12.707437   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:12.746926   68713 cri.go:89] found id: ""
	I0815 18:39:12.746949   68713 logs.go:276] 0 containers: []
	W0815 18:39:12.746957   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:12.746965   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:12.746977   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:09.853116   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.348297   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:11.655347   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:14.154929   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:13.158257   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:15.158457   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:12.792154   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:12.792180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:12.843933   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:12.843968   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:12.859583   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:12.859609   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:12.940856   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:12.940880   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:12.940895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:15.520265   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:15.533677   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:15.533754   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:15.572109   68713 cri.go:89] found id: ""
	I0815 18:39:15.572135   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.572145   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:15.572153   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:15.572221   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:15.607442   68713 cri.go:89] found id: ""
	I0815 18:39:15.607472   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.607484   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:15.607492   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:15.607551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:15.642206   68713 cri.go:89] found id: ""
	I0815 18:39:15.642230   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.642238   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:15.642246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:15.642308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:15.677914   68713 cri.go:89] found id: ""
	I0815 18:39:15.677945   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.677956   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:15.677963   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:15.678049   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:15.714466   68713 cri.go:89] found id: ""
	I0815 18:39:15.714496   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.714504   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:15.714510   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:15.714563   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:15.750961   68713 cri.go:89] found id: ""
	I0815 18:39:15.750987   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.750995   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:15.751002   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:15.751050   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:15.785399   68713 cri.go:89] found id: ""
	I0815 18:39:15.785434   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.785444   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:15.785450   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:15.785498   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:15.821547   68713 cri.go:89] found id: ""
	I0815 18:39:15.821571   68713 logs.go:276] 0 containers: []
	W0815 18:39:15.821578   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:15.821586   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:15.821597   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:15.875299   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:15.875329   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:15.890376   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:15.890408   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:15.957317   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:15.957337   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:15.957352   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:16.033952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:16.033997   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:14.349171   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.849292   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.850822   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:16.654572   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.656041   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:17.657984   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:19.658366   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:18.571953   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:18.584652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:18.584721   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:18.617043   68713 cri.go:89] found id: ""
	I0815 18:39:18.617066   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.617073   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:18.617079   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:18.617127   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:18.651080   68713 cri.go:89] found id: ""
	I0815 18:39:18.651112   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.651123   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:18.651130   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:18.651187   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:18.686857   68713 cri.go:89] found id: ""
	I0815 18:39:18.686890   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.686901   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:18.686909   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:18.686975   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:18.719397   68713 cri.go:89] found id: ""
	I0815 18:39:18.719434   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.719444   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:18.719452   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:18.719521   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:18.758316   68713 cri.go:89] found id: ""
	I0815 18:39:18.758349   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.758360   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:18.758366   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:18.758435   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:18.791586   68713 cri.go:89] found id: ""
	I0815 18:39:18.791609   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.791617   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:18.791623   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:18.791690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:18.827905   68713 cri.go:89] found id: ""
	I0815 18:39:18.827929   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.827937   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:18.827945   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:18.828004   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:18.869371   68713 cri.go:89] found id: ""
	I0815 18:39:18.869404   68713 logs.go:276] 0 containers: []
	W0815 18:39:18.869412   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:18.869420   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:18.869432   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:18.920124   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:18.920158   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:18.936137   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:18.936168   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:19.006877   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:19.006902   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:19.006913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:19.088909   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:19.088953   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.632734   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:21.647246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:21.647322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:21.685574   68713 cri.go:89] found id: ""
	I0815 18:39:21.685606   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.685614   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:21.685620   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:21.685676   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:21.717073   68713 cri.go:89] found id: ""
	I0815 18:39:21.717112   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.717124   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:21.717133   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:21.717205   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:21.752072   68713 cri.go:89] found id: ""
	I0815 18:39:21.752101   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.752112   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:21.752120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:21.752182   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:21.786811   68713 cri.go:89] found id: ""
	I0815 18:39:21.786834   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.786842   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:21.786848   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:21.786893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:21.823694   68713 cri.go:89] found id: ""
	I0815 18:39:21.823719   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.823728   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:21.823733   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:21.823790   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:21.859358   68713 cri.go:89] found id: ""
	I0815 18:39:21.859387   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.859398   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:21.859405   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:21.859469   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:21.893723   68713 cri.go:89] found id: ""
	I0815 18:39:21.893751   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.893761   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:21.893769   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:21.893826   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:21.929338   68713 cri.go:89] found id: ""
	I0815 18:39:21.929368   68713 logs.go:276] 0 containers: []
	W0815 18:39:21.929379   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:21.929388   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:21.929414   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:21.979107   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:21.979141   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:21.993968   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:21.994005   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:22.063359   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:22.063384   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:22.063398   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:22.144303   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:22.144337   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:21.348202   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.349199   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.154244   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.155954   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:21.658572   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:23.658782   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.658946   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:24.688104   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:24.701230   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:24.701298   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:24.735056   68713 cri.go:89] found id: ""
	I0815 18:39:24.735086   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.735097   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:24.735104   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:24.735172   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:24.769704   68713 cri.go:89] found id: ""
	I0815 18:39:24.769732   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.769743   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:24.769751   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:24.769812   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:24.808763   68713 cri.go:89] found id: ""
	I0815 18:39:24.808788   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.808796   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:24.808807   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:24.808856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:24.846997   68713 cri.go:89] found id: ""
	I0815 18:39:24.847028   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.847038   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:24.847045   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:24.847106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:24.881681   68713 cri.go:89] found id: ""
	I0815 18:39:24.881705   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.881713   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:24.881719   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:24.881772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:24.917000   68713 cri.go:89] found id: ""
	I0815 18:39:24.917024   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.917032   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:24.917040   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:24.917088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:24.951133   68713 cri.go:89] found id: ""
	I0815 18:39:24.951156   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.951164   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:24.951170   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:24.951218   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:24.987306   68713 cri.go:89] found id: ""
	I0815 18:39:24.987331   68713 logs.go:276] 0 containers: []
	W0815 18:39:24.987339   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:24.987347   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:24.987360   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:25.039533   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:25.039566   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:25.053011   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:25.053036   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:25.125864   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:25.125884   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:25.125895   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:25.209885   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:25.209916   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:27.751681   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:27.765316   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:27.765390   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:25.848840   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.849344   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:25.156068   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.654722   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:28.158317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.158632   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:27.805820   68713 cri.go:89] found id: ""
	I0815 18:39:27.805858   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.805870   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:27.805878   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:27.805940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:27.846684   68713 cri.go:89] found id: ""
	I0815 18:39:27.846717   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.846727   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:27.846737   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:27.846801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:27.882326   68713 cri.go:89] found id: ""
	I0815 18:39:27.882358   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.882370   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:27.882378   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:27.882448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:27.917340   68713 cri.go:89] found id: ""
	I0815 18:39:27.917416   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.917431   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:27.917442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:27.917505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:27.952674   68713 cri.go:89] found id: ""
	I0815 18:39:27.952700   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.952708   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:27.952714   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:27.952763   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:27.986103   68713 cri.go:89] found id: ""
	I0815 18:39:27.986132   68713 logs.go:276] 0 containers: []
	W0815 18:39:27.986143   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:27.986151   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:27.986212   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:28.023674   68713 cri.go:89] found id: ""
	I0815 18:39:28.023716   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.023735   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:28.023742   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:28.023807   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:28.064902   68713 cri.go:89] found id: ""
	I0815 18:39:28.064929   68713 logs.go:276] 0 containers: []
	W0815 18:39:28.064937   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:28.064945   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:28.064957   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:28.116145   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:28.116180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:28.130435   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:28.130462   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:28.204899   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:28.204920   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:28.204931   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:28.284165   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:28.284202   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:30.824135   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:30.837515   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:30.837583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:30.874671   68713 cri.go:89] found id: ""
	I0815 18:39:30.874695   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.874705   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:30.874712   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:30.874776   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:30.909990   68713 cri.go:89] found id: ""
	I0815 18:39:30.910027   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.910038   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:30.910045   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:30.910106   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:30.946824   68713 cri.go:89] found id: ""
	I0815 18:39:30.946851   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.946859   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:30.946865   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:30.946912   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:30.983392   68713 cri.go:89] found id: ""
	I0815 18:39:30.983419   68713 logs.go:276] 0 containers: []
	W0815 18:39:30.983429   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:30.983437   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:30.983505   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:31.023471   68713 cri.go:89] found id: ""
	I0815 18:39:31.023500   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.023510   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:31.023518   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:31.023583   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:31.063586   68713 cri.go:89] found id: ""
	I0815 18:39:31.063616   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.063627   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:31.063636   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:31.063696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:31.103147   68713 cri.go:89] found id: ""
	I0815 18:39:31.103173   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.103180   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:31.103186   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:31.103237   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:31.144082   68713 cri.go:89] found id: ""
	I0815 18:39:31.144113   68713 logs.go:276] 0 containers: []
	W0815 18:39:31.144124   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:31.144136   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:31.144150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:31.212535   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:31.212563   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:31.212586   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:31.292039   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:31.292076   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:31.335023   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:31.335050   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:31.388817   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:31.388853   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:30.349110   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.349209   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:30.154683   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.653806   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.654716   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:32.658245   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:34.659119   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:33.925861   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:33.939604   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:33.939668   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:33.974538   68713 cri.go:89] found id: ""
	I0815 18:39:33.974563   68713 logs.go:276] 0 containers: []
	W0815 18:39:33.974571   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:33.974577   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:33.974634   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:34.009017   68713 cri.go:89] found id: ""
	I0815 18:39:34.009048   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.009058   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:34.009064   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:34.009120   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:34.049478   68713 cri.go:89] found id: ""
	I0815 18:39:34.049501   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.049517   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:34.049523   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:34.049576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:34.091011   68713 cri.go:89] found id: ""
	I0815 18:39:34.091040   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.091050   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:34.091056   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:34.091114   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:34.126617   68713 cri.go:89] found id: ""
	I0815 18:39:34.126640   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.126650   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:34.126657   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:34.126706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:34.168140   68713 cri.go:89] found id: ""
	I0815 18:39:34.168169   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.168179   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:34.168187   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:34.168279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:34.205052   68713 cri.go:89] found id: ""
	I0815 18:39:34.205081   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.205091   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:34.205098   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:34.205173   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:34.238474   68713 cri.go:89] found id: ""
	I0815 18:39:34.238499   68713 logs.go:276] 0 containers: []
	W0815 18:39:34.238506   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:34.238521   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:34.238540   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.280574   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:34.280601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:34.332662   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:34.332704   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:34.348556   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:34.348591   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:34.421428   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:34.421450   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:34.421464   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.004855   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:37.019306   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:37.019378   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:37.057588   68713 cri.go:89] found id: ""
	I0815 18:39:37.057618   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.057626   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:37.057641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:37.057706   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:37.095645   68713 cri.go:89] found id: ""
	I0815 18:39:37.095678   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.095687   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:37.095693   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:37.095750   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:37.131669   68713 cri.go:89] found id: ""
	I0815 18:39:37.131696   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.131711   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:37.131717   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:37.131772   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:37.185065   68713 cri.go:89] found id: ""
	I0815 18:39:37.185097   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.185108   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:37.185115   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:37.185180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:37.220220   68713 cri.go:89] found id: ""
	I0815 18:39:37.220251   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.220262   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:37.220269   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:37.220322   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:37.259816   68713 cri.go:89] found id: ""
	I0815 18:39:37.259849   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.259859   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:37.259868   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:37.259920   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:37.292777   68713 cri.go:89] found id: ""
	I0815 18:39:37.292807   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.292818   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:37.292825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:37.292888   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:37.328673   68713 cri.go:89] found id: ""
	I0815 18:39:37.328703   68713 logs.go:276] 0 containers: []
	W0815 18:39:37.328714   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:37.328725   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:37.328740   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:37.379131   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:37.379172   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:37.392982   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:37.393017   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:37.470727   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:37.470750   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:37.470766   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:37.552353   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:37.552386   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:34.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.349765   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:36.655101   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.154433   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:37.158746   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:39.658907   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:40.094008   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:40.107681   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:40.107753   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:40.142229   68713 cri.go:89] found id: ""
	I0815 18:39:40.142254   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.142264   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:40.142271   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:40.142333   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:40.180622   68713 cri.go:89] found id: ""
	I0815 18:39:40.180650   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.180665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:40.180672   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:40.180733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:40.219085   68713 cri.go:89] found id: ""
	I0815 18:39:40.219113   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.219120   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:40.219126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:40.219174   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:40.254807   68713 cri.go:89] found id: ""
	I0815 18:39:40.254838   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.254850   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:40.254858   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:40.254940   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:40.290438   68713 cri.go:89] found id: ""
	I0815 18:39:40.290466   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.290478   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:40.290484   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:40.290547   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:40.326320   68713 cri.go:89] found id: ""
	I0815 18:39:40.326356   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.326364   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:40.326370   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:40.326429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:40.361538   68713 cri.go:89] found id: ""
	I0815 18:39:40.361563   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.361570   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:40.361576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:40.361629   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:40.397275   68713 cri.go:89] found id: ""
	I0815 18:39:40.397304   68713 logs.go:276] 0 containers: []
	W0815 18:39:40.397316   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:40.397326   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:40.397342   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:40.466042   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:40.466064   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:40.466078   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:40.544915   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:40.544951   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:40.584992   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:40.585019   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:40.634792   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:40.634837   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:39.848609   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.849831   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:41.655153   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.655372   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:42.159650   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:44.658547   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:43.149819   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:43.164578   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:43.164649   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:43.199576   68713 cri.go:89] found id: ""
	I0815 18:39:43.199621   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.199633   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:43.199641   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:43.199702   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:43.233783   68713 cri.go:89] found id: ""
	I0815 18:39:43.233820   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.233833   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:43.233840   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:43.233911   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:43.269406   68713 cri.go:89] found id: ""
	I0815 18:39:43.269437   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.269449   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:43.269457   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:43.269570   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:43.310686   68713 cri.go:89] found id: ""
	I0815 18:39:43.310715   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.310726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:43.310734   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:43.310795   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:43.348662   68713 cri.go:89] found id: ""
	I0815 18:39:43.348689   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.348699   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:43.348706   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:43.348767   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:43.385676   68713 cri.go:89] found id: ""
	I0815 18:39:43.385714   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.385726   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:43.385737   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:43.385802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:43.422605   68713 cri.go:89] found id: ""
	I0815 18:39:43.422634   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.422645   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:43.422653   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:43.422712   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:43.463208   68713 cri.go:89] found id: ""
	I0815 18:39:43.463238   68713 logs.go:276] 0 containers: []
	W0815 18:39:43.463249   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:43.463260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:43.463278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:43.476637   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:43.476664   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:43.552239   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:43.552263   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:43.552278   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:43.653055   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:43.653108   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:43.699166   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:43.699192   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.251725   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:46.265164   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:46.265240   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:46.305095   68713 cri.go:89] found id: ""
	I0815 18:39:46.305123   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.305133   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:46.305140   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:46.305196   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:46.349744   68713 cri.go:89] found id: ""
	I0815 18:39:46.349773   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.349783   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:46.349790   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:46.349858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:46.385807   68713 cri.go:89] found id: ""
	I0815 18:39:46.385839   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.385847   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:46.385853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:46.385908   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:46.419977   68713 cri.go:89] found id: ""
	I0815 18:39:46.420011   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.420024   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:46.420031   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:46.420093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:46.454852   68713 cri.go:89] found id: ""
	I0815 18:39:46.454883   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.454894   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:46.454901   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:46.454962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:46.497157   68713 cri.go:89] found id: ""
	I0815 18:39:46.497192   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.497202   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:46.497210   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:46.497271   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:46.530191   68713 cri.go:89] found id: ""
	I0815 18:39:46.530218   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.530226   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:46.530232   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:46.530282   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:46.566024   68713 cri.go:89] found id: ""
	I0815 18:39:46.566050   68713 logs.go:276] 0 containers: []
	W0815 18:39:46.566063   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:46.566074   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:46.566103   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:46.621969   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:46.622004   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:46.636576   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:46.636603   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:46.706819   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:46.706842   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:46.706857   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:46.786589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:46.786634   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:44.352685   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.849090   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.849424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:45.655900   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:48.154862   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:46.658710   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.157317   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:49.324853   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:49.343543   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:49.343618   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:49.396260   68713 cri.go:89] found id: ""
	I0815 18:39:49.396292   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.396303   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:49.396311   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:49.396380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:49.437579   68713 cri.go:89] found id: ""
	I0815 18:39:49.437604   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.437612   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:49.437617   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:49.437663   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:49.476206   68713 cri.go:89] found id: ""
	I0815 18:39:49.476232   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.476239   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:49.476245   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:49.476296   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:49.511324   68713 cri.go:89] found id: ""
	I0815 18:39:49.511349   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.511357   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:49.511363   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:49.511428   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:49.545875   68713 cri.go:89] found id: ""
	I0815 18:39:49.545907   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.545916   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:49.545922   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:49.545981   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:49.582176   68713 cri.go:89] found id: ""
	I0815 18:39:49.582204   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.582228   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:49.582246   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:49.582309   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:49.623288   68713 cri.go:89] found id: ""
	I0815 18:39:49.623318   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.623327   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:49.623333   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:49.623394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:49.662352   68713 cri.go:89] found id: ""
	I0815 18:39:49.662377   68713 logs.go:276] 0 containers: []
	W0815 18:39:49.662389   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:49.662399   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:49.662424   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:49.745582   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:49.745617   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:49.785256   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:49.785295   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:49.835944   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:49.835979   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:49.852859   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:49.852886   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:49.928427   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
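	For readers skimming this failure: the block above is one full container-probe pass. Each control-plane component is looked up with `sudo crictl ps -a --quiet --name=<component>`, and an empty result is logged as "No container was found matching ...". Below is a minimal, hypothetical Go sketch of that pattern — not minikube's actual cri.go/logs.go code — assuming crictl and passwordless sudo are available on the node.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listCRIContainers runs `crictl ps -a --quiet --name=<name>` and returns any
	// matching container IDs, mirroring the cri.go probes in the log above.
	func listCRIContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		// The same component list the log cycles through.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, c := range components {
			ids, err := listCRIContainers(c)
			if err != nil {
				fmt.Printf("probe for %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				// Every probe in the log above ends up on this branch.
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			fmt.Printf("%s: found %v\n", c, ids)
		}
	}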
	I0815 18:39:52.429223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:52.442384   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:52.442460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:52.480515   68713 cri.go:89] found id: ""
	I0815 18:39:52.480543   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.480553   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:52.480558   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:52.480605   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:52.518346   68713 cri.go:89] found id: ""
	I0815 18:39:52.518382   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.518393   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:52.518401   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:52.518460   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:52.557696   68713 cri.go:89] found id: ""
	I0815 18:39:52.557722   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.557731   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:52.557736   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:52.557786   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:52.590849   68713 cri.go:89] found id: ""
	I0815 18:39:52.590879   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.590890   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:52.590898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:52.590961   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:52.629950   68713 cri.go:89] found id: ""
	I0815 18:39:52.629980   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.629992   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:52.629999   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:52.630047   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:52.666039   68713 cri.go:89] found id: ""
	I0815 18:39:52.666070   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.666081   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:52.666089   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:52.666146   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:52.699917   68713 cri.go:89] found id: ""
	I0815 18:39:52.699941   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.699949   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:52.699955   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:52.700001   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:52.735944   68713 cri.go:89] found id: ""
	I0815 18:39:52.735973   68713 logs.go:276] 0 containers: []
	W0815 18:39:52.735981   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:52.735989   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:52.736001   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:39:50.849633   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.850298   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:50.155118   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:52.155166   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:54.653844   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:51.159401   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:53.658513   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:39:52.805519   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:52.805537   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:52.805559   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:52.894175   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:52.894213   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:52.932974   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:52.933006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:52.984206   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:52.984244   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.498477   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:55.511319   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:55.511380   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:55.544899   68713 cri.go:89] found id: ""
	I0815 18:39:55.544928   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.544936   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:55.544943   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:55.545003   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:55.578821   68713 cri.go:89] found id: ""
	I0815 18:39:55.578855   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.578864   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:55.578869   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:55.578922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:55.615392   68713 cri.go:89] found id: ""
	I0815 18:39:55.615422   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.615434   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:55.615441   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:55.615501   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:55.653456   68713 cri.go:89] found id: ""
	I0815 18:39:55.653482   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.653493   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:55.653500   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:55.653558   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:55.687716   68713 cri.go:89] found id: ""
	I0815 18:39:55.687741   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.687749   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:55.687755   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:55.687802   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:55.725518   68713 cri.go:89] found id: ""
	I0815 18:39:55.725543   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.725553   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:55.725561   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:55.725631   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:55.758451   68713 cri.go:89] found id: ""
	I0815 18:39:55.758479   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.758490   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:55.758498   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:55.758560   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:55.792653   68713 cri.go:89] found id: ""
	I0815 18:39:55.792680   68713 logs.go:276] 0 containers: []
	W0815 18:39:55.792687   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:55.792699   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:55.792710   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:39:55.832127   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:55.832156   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:55.885255   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:55.885289   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:55.898980   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:55.899009   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:55.967579   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:55.967609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:55.967624   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:55.348998   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:57.349656   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.654840   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.655471   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:56.158348   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:39:58.658194   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.658852   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
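	The interleaved pod_ready.go:103 lines come from the other test processes (pids 67936, 68248, 68429), each polling whether its metrics-server pod has reached the Ready condition. The following is a minimal sketch of that kind of readiness poll, assuming kubectl is on PATH; the pod name is copied from the log purely for illustration, and this is not minikube's pod_ready.go implementation (which queries the API directly).

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady asks kubectl for the pod's Ready condition, the same signal the
	// pod_ready.go:103 lines above keep reporting as "False".
	func podReady(namespace, name string) (bool, error) {
		out, err := exec.Command(
			"kubectl", "get", "pod", "-n", namespace, name,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`,
		).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		// Pod name copied from the log above; adjust for your own cluster.
		const ns, pod = "kube-system", "metrics-server-6867b74b74-8mppk"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			if ready, err := podReady(ns, pod); err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", pod, ns)
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for the pod to become Ready")
	}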
	I0815 18:39:58.543524   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:39:58.556338   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:39:58.556412   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:39:58.593359   68713 cri.go:89] found id: ""
	I0815 18:39:58.593390   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.593401   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:39:58.593409   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:39:58.593472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:39:58.628446   68713 cri.go:89] found id: ""
	I0815 18:39:58.628471   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.628481   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:39:58.628504   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:39:58.628567   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:39:58.663930   68713 cri.go:89] found id: ""
	I0815 18:39:58.663954   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.663964   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:39:58.663971   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:39:58.664028   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:39:58.701070   68713 cri.go:89] found id: ""
	I0815 18:39:58.701095   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.701103   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:39:58.701108   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:39:58.701156   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:39:58.734427   68713 cri.go:89] found id: ""
	I0815 18:39:58.734457   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.734468   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:39:58.734476   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:39:58.734543   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:39:58.769121   68713 cri.go:89] found id: ""
	I0815 18:39:58.769144   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.769152   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:39:58.769162   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:39:58.769215   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:39:58.805771   68713 cri.go:89] found id: ""
	I0815 18:39:58.805796   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.805803   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:39:58.805808   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:39:58.805856   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:39:58.840288   68713 cri.go:89] found id: ""
	I0815 18:39:58.840315   68713 logs.go:276] 0 containers: []
	W0815 18:39:58.840325   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:39:58.840336   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:39:58.840351   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:39:58.895856   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:39:58.895893   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:39:58.909453   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:39:58.909478   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:39:58.975939   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:39:58.975960   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:39:58.975971   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.055318   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:39:59.055353   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.595588   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:01.608625   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:01.608690   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:01.646105   68713 cri.go:89] found id: ""
	I0815 18:40:01.646133   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.646144   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:01.646151   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:01.646214   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:01.685162   68713 cri.go:89] found id: ""
	I0815 18:40:01.685192   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.685202   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:01.685210   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:01.685261   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:01.721452   68713 cri.go:89] found id: ""
	I0815 18:40:01.721479   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.721499   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:01.721507   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:01.721576   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:01.762288   68713 cri.go:89] found id: ""
	I0815 18:40:01.762318   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.762331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:01.762339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:01.762429   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:01.800547   68713 cri.go:89] found id: ""
	I0815 18:40:01.800579   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.800590   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:01.800598   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:01.800660   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:01.839182   68713 cri.go:89] found id: ""
	I0815 18:40:01.839214   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.839223   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:01.839229   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:01.839294   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:01.875364   68713 cri.go:89] found id: ""
	I0815 18:40:01.875390   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.875398   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:01.875404   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:01.875452   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:01.910485   68713 cri.go:89] found id: ""
	I0815 18:40:01.910512   68713 logs.go:276] 0 containers: []
	W0815 18:40:01.910521   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:01.910535   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:01.910547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:01.951970   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:01.951998   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:02.005720   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:02.005764   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:02.020941   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:02.020969   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:02.101206   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:02.101224   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:02.101236   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:39:59.850909   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:02.349180   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:00.659366   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.153614   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:03.158375   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.159868   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:04.687482   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:04.701501   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:04.701562   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.739613   68713 cri.go:89] found id: ""
	I0815 18:40:04.739636   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.739644   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:04.739650   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:04.739704   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:04.774419   68713 cri.go:89] found id: ""
	I0815 18:40:04.774443   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.774453   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:04.774460   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:04.774522   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:04.809516   68713 cri.go:89] found id: ""
	I0815 18:40:04.809538   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.809547   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:04.809552   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:04.809612   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:04.843822   68713 cri.go:89] found id: ""
	I0815 18:40:04.843850   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.843870   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:04.843878   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:04.843942   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:04.883853   68713 cri.go:89] found id: ""
	I0815 18:40:04.883881   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.883892   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:04.883900   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:04.883962   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:04.918811   68713 cri.go:89] found id: ""
	I0815 18:40:04.918838   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.918846   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:04.918852   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:04.918903   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:04.953076   68713 cri.go:89] found id: ""
	I0815 18:40:04.953101   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.953110   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:04.953116   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:04.953163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:04.988219   68713 cri.go:89] found id: ""
	I0815 18:40:04.988246   68713 logs.go:276] 0 containers: []
	W0815 18:40:04.988255   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:04.988264   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:04.988275   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:05.060859   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:05.060896   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:05.060913   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:05.146768   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:05.146817   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:05.187816   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:05.187845   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:05.239027   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:05.239067   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
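	Each probe pass ends with the same "Gathering logs for ..." fallback: the kubelet and CRI-O journals, dmesg, container status, and a `kubectl describe nodes` that keeps failing because nothing is listening on localhost:8443. A minimal sketch of that gathering step is below, with the commands copied from the log; the structure and error handling are illustrative, not minikube's logs.go.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Each entry mirrors one "Gathering logs for ..." step in the cycle above.
	var sources = []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}

	func main() {
		for _, s := range sources {
			out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
			if err != nil {
				// While the apiserver on localhost:8443 is down, only the
				// "describe nodes" source lands here; the host-level sources
				// (journalctl, dmesg, crictl) still succeed.
				fmt.Printf("failed to gather %s: %v\n", s.name, err)
				continue
			}
			fmt.Printf("gathered %s (%d bytes)\n", s.name, len(out))
		}
	}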
	I0815 18:40:07.754503   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:07.769608   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:07.769695   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:04.849108   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:06.850409   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:05.155042   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.654547   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:09.654825   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.658972   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:10.159255   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:07.804435   68713 cri.go:89] found id: ""
	I0815 18:40:07.804460   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.804468   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:07.804474   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:07.804551   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:07.839760   68713 cri.go:89] found id: ""
	I0815 18:40:07.839787   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.839797   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:07.839804   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:07.839868   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:07.877984   68713 cri.go:89] found id: ""
	I0815 18:40:07.878009   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.878017   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:07.878022   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:07.878070   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:07.914294   68713 cri.go:89] found id: ""
	I0815 18:40:07.914319   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.914328   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:07.914336   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:07.914395   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:07.948751   68713 cri.go:89] found id: ""
	I0815 18:40:07.948777   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.948787   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:07.948795   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:07.948861   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:07.982262   68713 cri.go:89] found id: ""
	I0815 18:40:07.982288   68713 logs.go:276] 0 containers: []
	W0815 18:40:07.982296   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:07.982302   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:07.982358   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:08.015560   68713 cri.go:89] found id: ""
	I0815 18:40:08.015588   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.015596   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:08.015602   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:08.015662   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:08.049854   68713 cri.go:89] found id: ""
	I0815 18:40:08.049878   68713 logs.go:276] 0 containers: []
	W0815 18:40:08.049885   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:08.049893   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:08.049905   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:08.102269   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:08.102303   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:08.117181   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:08.117209   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:08.188586   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:08.188609   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:08.188623   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:08.272204   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:08.272239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:10.813223   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:10.826181   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:10.826257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:10.863728   68713 cri.go:89] found id: ""
	I0815 18:40:10.863753   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.863761   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:10.863766   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:10.863813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:10.898074   68713 cri.go:89] found id: ""
	I0815 18:40:10.898102   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.898113   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:10.898121   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:10.898183   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:10.933948   68713 cri.go:89] found id: ""
	I0815 18:40:10.933980   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.933991   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:10.933998   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:10.934059   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:10.972402   68713 cri.go:89] found id: ""
	I0815 18:40:10.972428   68713 logs.go:276] 0 containers: []
	W0815 18:40:10.972436   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:10.972442   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:10.972509   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:11.006814   68713 cri.go:89] found id: ""
	I0815 18:40:11.006843   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.006851   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:11.006857   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:11.006909   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:11.042739   68713 cri.go:89] found id: ""
	I0815 18:40:11.042763   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.042771   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:11.042777   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:11.042835   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:11.079132   68713 cri.go:89] found id: ""
	I0815 18:40:11.079164   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.079173   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:11.079179   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:11.079228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:11.113271   68713 cri.go:89] found id: ""
	I0815 18:40:11.113298   68713 logs.go:276] 0 containers: []
	W0815 18:40:11.113309   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:11.113317   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:11.113328   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:11.166669   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:11.166698   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:11.180789   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:11.180815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:11.247954   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:11.247985   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:11.247999   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:11.331952   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:11.331995   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:09.349194   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.349627   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.850439   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:11.655088   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.656674   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:12.658287   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:15.158361   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:13.874466   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:13.888346   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:13.888416   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:13.922542   68713 cri.go:89] found id: ""
	I0815 18:40:13.922569   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.922579   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:13.922586   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:13.922654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:13.958039   68713 cri.go:89] found id: ""
	I0815 18:40:13.958066   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.958076   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:13.958082   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:13.958131   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:13.994095   68713 cri.go:89] found id: ""
	I0815 18:40:13.994125   68713 logs.go:276] 0 containers: []
	W0815 18:40:13.994136   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:13.994144   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:13.994195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:14.027918   68713 cri.go:89] found id: ""
	I0815 18:40:14.027949   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.027960   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:14.027969   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:14.028027   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:14.063849   68713 cri.go:89] found id: ""
	I0815 18:40:14.063879   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.063889   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:14.063897   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:14.063957   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:14.098444   68713 cri.go:89] found id: ""
	I0815 18:40:14.098473   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.098483   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:14.098490   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:14.098553   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:14.136834   68713 cri.go:89] found id: ""
	I0815 18:40:14.136861   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.136874   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:14.136880   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:14.136925   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:14.172377   68713 cri.go:89] found id: ""
	I0815 18:40:14.172400   68713 logs.go:276] 0 containers: []
	W0815 18:40:14.172408   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:14.172415   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:14.172430   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:14.212212   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:14.212242   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:14.268412   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:14.268450   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:14.282978   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:14.283006   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:14.352777   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:14.352796   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:14.352822   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:16.939906   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:16.953118   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:16.953178   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:16.991697   68713 cri.go:89] found id: ""
	I0815 18:40:16.991723   68713 logs.go:276] 0 containers: []
	W0815 18:40:16.991731   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:16.991736   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:16.991801   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:17.027572   68713 cri.go:89] found id: ""
	I0815 18:40:17.027602   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.027613   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:17.027623   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:17.027682   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:17.060718   68713 cri.go:89] found id: ""
	I0815 18:40:17.060750   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.060763   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:17.060771   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:17.060829   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:17.096746   68713 cri.go:89] found id: ""
	I0815 18:40:17.096771   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.096780   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:17.096786   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:17.096846   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:17.130755   68713 cri.go:89] found id: ""
	I0815 18:40:17.130791   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.130802   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:17.130810   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:17.130872   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:17.167991   68713 cri.go:89] found id: ""
	I0815 18:40:17.168016   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.168026   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:17.168034   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:17.168093   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:17.200695   68713 cri.go:89] found id: ""
	I0815 18:40:17.200722   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.200733   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:17.200741   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:17.200799   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:17.237788   68713 cri.go:89] found id: ""
	I0815 18:40:17.237816   68713 logs.go:276] 0 containers: []
	W0815 18:40:17.237824   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:17.237833   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:17.237848   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:17.288888   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:17.288921   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:17.302862   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:17.302903   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:17.370062   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:17.370085   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:17.370100   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:17.444742   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:17.444781   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:16.349749   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.849197   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:16.155555   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:18.654875   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:17.160009   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.657774   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:19.984813   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:19.998010   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:19.998077   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:20.032880   68713 cri.go:89] found id: ""
	I0815 18:40:20.032903   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.032912   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:20.032918   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:20.032973   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:20.069191   68713 cri.go:89] found id: ""
	I0815 18:40:20.069224   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.069236   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:20.069243   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:20.069301   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:20.101930   68713 cri.go:89] found id: ""
	I0815 18:40:20.101954   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.101962   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:20.101968   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:20.102016   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:20.136981   68713 cri.go:89] found id: ""
	I0815 18:40:20.137006   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.137014   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:20.137020   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:20.137066   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:20.174517   68713 cri.go:89] found id: ""
	I0815 18:40:20.174543   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.174550   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:20.174556   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:20.174611   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:20.208525   68713 cri.go:89] found id: ""
	I0815 18:40:20.208549   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.208559   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:20.208567   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:20.208626   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:20.240824   68713 cri.go:89] found id: ""
	I0815 18:40:20.240855   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.240867   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:20.240874   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:20.240946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:20.277683   68713 cri.go:89] found id: ""
	I0815 18:40:20.277710   68713 logs.go:276] 0 containers: []
	W0815 18:40:20.277720   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:20.277728   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:20.277739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:20.324271   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:20.324304   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:20.376250   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:20.376285   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:20.392777   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:20.392813   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:20.464122   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:20.464156   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:20.464180   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:20.849461   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:22.849591   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:20.654982   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.154537   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:21.658354   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:23.658505   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
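Interleaved with that retry loop, three other test runs (processes 67936, 68248 and 68429) keep polling their metrics-server pods, which continue to report Ready=False. A hedged, illustrative equivalent of that readiness check with kubectl is sketched below; the pod name is taken from the log, but the jsonpath form is an assumption and not what pod_ready.go actually executes:

	# Inspect the Ready condition of the metrics-server pod seen in the log.
	kubectl -n kube-system get pod metrics-server-6867b74b74-djv7r \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Prints "False" while the pod is not Ready, matching the pod_ready.go:103 lines.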
	I0815 18:40:23.041684   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:23.055779   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:23.055858   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:23.095391   68713 cri.go:89] found id: ""
	I0815 18:40:23.095414   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.095426   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:23.095432   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:23.095483   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:23.134907   68713 cri.go:89] found id: ""
	I0815 18:40:23.134936   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.134943   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:23.134949   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:23.134994   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:23.171806   68713 cri.go:89] found id: ""
	I0815 18:40:23.171845   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.171854   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:23.171861   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:23.171924   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:23.205378   68713 cri.go:89] found id: ""
	I0815 18:40:23.205404   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.205412   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:23.205417   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:23.205467   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:23.239503   68713 cri.go:89] found id: ""
	I0815 18:40:23.239531   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.239540   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:23.239547   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:23.239614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:23.275802   68713 cri.go:89] found id: ""
	I0815 18:40:23.275828   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.275842   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:23.275849   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:23.275894   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:23.310127   68713 cri.go:89] found id: ""
	I0815 18:40:23.310154   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.310167   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:23.310173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:23.310219   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:23.344646   68713 cri.go:89] found id: ""
	I0815 18:40:23.344674   68713 logs.go:276] 0 containers: []
	W0815 18:40:23.344685   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:23.344696   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:23.344711   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:23.397260   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:23.397310   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:23.425518   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:23.425553   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:23.495528   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:23.495547   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:23.495562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:23.574489   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:23.574524   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.119044   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:26.133806   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:26.133880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:26.175683   68713 cri.go:89] found id: ""
	I0815 18:40:26.175711   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.175722   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:26.175730   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:26.175789   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:26.210634   68713 cri.go:89] found id: ""
	I0815 18:40:26.210658   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.210665   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:26.210671   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:26.210724   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:26.244146   68713 cri.go:89] found id: ""
	I0815 18:40:26.244176   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.244187   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:26.244195   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:26.244274   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:26.277312   68713 cri.go:89] found id: ""
	I0815 18:40:26.277335   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.277343   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:26.277349   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:26.277410   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:26.311538   68713 cri.go:89] found id: ""
	I0815 18:40:26.311562   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.311570   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:26.311576   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:26.311623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:26.347816   68713 cri.go:89] found id: ""
	I0815 18:40:26.347840   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.347847   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:26.347853   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:26.347906   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:26.381211   68713 cri.go:89] found id: ""
	I0815 18:40:26.381234   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.381242   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:26.381248   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:26.381303   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:26.413982   68713 cri.go:89] found id: ""
	I0815 18:40:26.414010   68713 logs.go:276] 0 containers: []
	W0815 18:40:26.414018   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:26.414027   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:26.414038   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:26.500686   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:26.500721   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:26.537615   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:26.537642   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:26.590119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:26.590150   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:26.603713   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:26.603739   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:26.675455   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:25.349400   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.853388   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:25.155463   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:27.155580   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.156973   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:26.158898   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:28.658576   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:29.176084   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:29.189743   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:29.189813   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:29.225500   68713 cri.go:89] found id: ""
	I0815 18:40:29.225536   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.225548   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:29.225557   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:29.225614   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:29.261839   68713 cri.go:89] found id: ""
	I0815 18:40:29.261866   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.261877   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:29.261884   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:29.261946   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:29.296685   68713 cri.go:89] found id: ""
	I0815 18:40:29.296708   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.296716   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:29.296728   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:29.296787   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:29.332524   68713 cri.go:89] found id: ""
	I0815 18:40:29.332550   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.332558   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:29.332564   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:29.332615   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:29.368918   68713 cri.go:89] found id: ""
	I0815 18:40:29.368943   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.368953   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:29.368961   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:29.369020   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:29.403175   68713 cri.go:89] found id: ""
	I0815 18:40:29.403200   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.403211   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:29.403218   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:29.403279   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:29.438957   68713 cri.go:89] found id: ""
	I0815 18:40:29.438981   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.438989   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:29.438994   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:29.439051   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:29.472153   68713 cri.go:89] found id: ""
	I0815 18:40:29.472184   68713 logs.go:276] 0 containers: []
	W0815 18:40:29.472195   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:29.472206   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:29.472221   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:29.560484   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:29.560547   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:29.600366   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:29.600402   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:29.656536   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:29.656569   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:29.669899   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:29.669925   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:29.738515   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
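Every "describe nodes" attempt fails the same way: kubectl on the node cannot reach the API server because nothing is answering on localhost:8443, which is consistent with the empty kube-apiserver probes above. A minimal sketch of how one might confirm that directly on the node (illustrative commands, not part of the recorded test run) is:

	# Check whether anything is listening on the API server port, and whether
	# a kube-apiserver container exists in any state.
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	sudo crictl ps -a --name kube-apiserver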
	I0815 18:40:32.239207   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:32.253976   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:32.254048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:32.290918   68713 cri.go:89] found id: ""
	I0815 18:40:32.290942   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.290951   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:32.290957   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:32.291009   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:32.325567   68713 cri.go:89] found id: ""
	I0815 18:40:32.325596   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.325606   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:32.325613   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:32.325674   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:32.360959   68713 cri.go:89] found id: ""
	I0815 18:40:32.360994   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.361005   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:32.361015   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:32.361090   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:32.398583   68713 cri.go:89] found id: ""
	I0815 18:40:32.398614   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.398625   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:32.398633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:32.398696   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:32.432980   68713 cri.go:89] found id: ""
	I0815 18:40:32.433007   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.433017   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:32.433024   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:32.433088   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:32.467645   68713 cri.go:89] found id: ""
	I0815 18:40:32.467678   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.467688   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:32.467697   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:32.467757   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:32.504233   68713 cri.go:89] found id: ""
	I0815 18:40:32.504265   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.504275   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:32.504282   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:32.504347   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:32.539127   68713 cri.go:89] found id: ""
	I0815 18:40:32.539160   68713 logs.go:276] 0 containers: []
	W0815 18:40:32.539175   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:32.539186   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:32.539200   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:32.620782   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:32.620818   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:32.660920   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:32.660950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:32.714392   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:32.714425   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:32.727629   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:32.727655   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:40:30.349267   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:32.349896   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.655451   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:34.154871   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:31.157219   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:33.158733   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:35.158871   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	W0815 18:40:32.801258   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.301393   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:35.315460   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:35.315515   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:35.352266   68713 cri.go:89] found id: ""
	I0815 18:40:35.352287   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.352295   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:35.352301   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:35.352345   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:35.387274   68713 cri.go:89] found id: ""
	I0815 18:40:35.387305   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.387316   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:35.387324   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:35.387386   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:35.422376   68713 cri.go:89] found id: ""
	I0815 18:40:35.422403   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.422413   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:35.422419   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:35.422464   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:35.456423   68713 cri.go:89] found id: ""
	I0815 18:40:35.456452   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.456459   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:35.456465   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:35.456544   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:35.494878   68713 cri.go:89] found id: ""
	I0815 18:40:35.494903   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.494912   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:35.494919   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:35.494980   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:35.528027   68713 cri.go:89] found id: ""
	I0815 18:40:35.528051   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.528062   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:35.528069   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:35.528128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:35.568543   68713 cri.go:89] found id: ""
	I0815 18:40:35.568570   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.568580   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:35.568587   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:35.568654   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:35.627717   68713 cri.go:89] found id: ""
	I0815 18:40:35.627747   68713 logs.go:276] 0 containers: []
	W0815 18:40:35.627766   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:35.627777   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:35.627792   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:35.691497   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:35.691530   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:35.705062   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:35.705092   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:35.783785   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:35.783806   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:35.783819   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:35.867282   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:35.867317   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:34.848226   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.849242   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.850686   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:36.154981   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.155165   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:37.659017   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.158408   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:38.407940   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:38.421571   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:38.421648   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:38.456551   68713 cri.go:89] found id: ""
	I0815 18:40:38.456586   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.456597   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:38.456604   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:38.456665   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:38.494133   68713 cri.go:89] found id: ""
	I0815 18:40:38.494167   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.494179   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:38.494186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:38.494253   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:38.531566   68713 cri.go:89] found id: ""
	I0815 18:40:38.531599   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.531610   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:38.531617   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:38.531678   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:38.567613   68713 cri.go:89] found id: ""
	I0815 18:40:38.567640   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.567652   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:38.567659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:38.567717   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:38.603172   68713 cri.go:89] found id: ""
	I0815 18:40:38.603201   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.603212   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:38.603225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:38.603284   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:38.639600   68713 cri.go:89] found id: ""
	I0815 18:40:38.639629   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.639640   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:38.639648   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:38.639710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:38.675780   68713 cri.go:89] found id: ""
	I0815 18:40:38.675811   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.675821   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:38.675828   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:38.675885   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:38.708745   68713 cri.go:89] found id: ""
	I0815 18:40:38.708775   68713 logs.go:276] 0 containers: []
	W0815 18:40:38.708786   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:38.708796   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:38.708815   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:38.722485   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:38.722514   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:38.793913   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:38.793936   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:38.793950   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:38.880706   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:38.880744   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:38.919505   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:38.919533   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.472452   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:41.486204   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:41.486264   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:41.520251   68713 cri.go:89] found id: ""
	I0815 18:40:41.520282   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.520294   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:41.520302   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:41.520362   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:41.561294   68713 cri.go:89] found id: ""
	I0815 18:40:41.561325   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.561336   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:41.561343   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:41.561403   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:41.595290   68713 cri.go:89] found id: ""
	I0815 18:40:41.595318   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.595326   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:41.595331   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:41.595381   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:41.629706   68713 cri.go:89] found id: ""
	I0815 18:40:41.629736   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.629744   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:41.629750   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:41.629816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:41.671862   68713 cri.go:89] found id: ""
	I0815 18:40:41.671885   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.671893   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:41.671898   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:41.671951   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:41.710298   68713 cri.go:89] found id: ""
	I0815 18:40:41.710349   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.710360   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:41.710368   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:41.710425   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:41.745434   68713 cri.go:89] found id: ""
	I0815 18:40:41.745472   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.745487   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:41.745492   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:41.745548   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:41.781038   68713 cri.go:89] found id: ""
	I0815 18:40:41.781073   68713 logs.go:276] 0 containers: []
	W0815 18:40:41.781081   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:41.781088   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:41.781099   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:41.863977   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:41.864023   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:41.907477   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:41.907505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:41.962921   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:41.962956   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:41.976458   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:41.976505   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:42.044372   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:41.349260   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.349615   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:40.656633   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:43.154626   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:42.658519   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.659640   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:44.544803   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:44.559538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:44.559595   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:44.595471   68713 cri.go:89] found id: ""
	I0815 18:40:44.595501   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.595511   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:44.595518   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:44.595581   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:44.630148   68713 cri.go:89] found id: ""
	I0815 18:40:44.630173   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.630181   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:44.630189   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:44.630245   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:44.666084   68713 cri.go:89] found id: ""
	I0815 18:40:44.666110   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.666119   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:44.666126   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:44.666180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:44.700286   68713 cri.go:89] found id: ""
	I0815 18:40:44.700320   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.700331   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:44.700339   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:44.700394   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:44.734115   68713 cri.go:89] found id: ""
	I0815 18:40:44.734143   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.734151   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:44.734157   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:44.734216   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:44.770306   68713 cri.go:89] found id: ""
	I0815 18:40:44.770363   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.770376   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:44.770383   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:44.770453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:44.806766   68713 cri.go:89] found id: ""
	I0815 18:40:44.806790   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.806798   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:44.806803   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:44.806865   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:44.843574   68713 cri.go:89] found id: ""
	I0815 18:40:44.843603   68713 logs.go:276] 0 containers: []
	W0815 18:40:44.843613   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:44.843623   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:44.843638   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:44.896119   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:44.896148   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:44.909537   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:44.909562   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:44.980268   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:44.980290   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:44.980307   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:45.066589   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:45.066626   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:47.605934   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:47.620644   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:47.620709   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:47.660939   68713 cri.go:89] found id: ""
	I0815 18:40:47.660960   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.660967   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:47.660973   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:47.661021   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:47.701018   68713 cri.go:89] found id: ""
	I0815 18:40:47.701047   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.701059   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:47.701107   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:47.701177   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:47.739487   68713 cri.go:89] found id: ""
	I0815 18:40:47.739514   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.739523   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:47.739528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:47.739584   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:47.781483   68713 cri.go:89] found id: ""
	I0815 18:40:47.781508   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.781515   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:47.781520   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:47.781571   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:45.850565   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.851368   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:45.156177   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.654437   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.157895   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:49.658101   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:47.816781   68713 cri.go:89] found id: ""
	I0815 18:40:47.816806   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.816813   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:47.816819   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:47.816875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:47.853951   68713 cri.go:89] found id: ""
	I0815 18:40:47.853976   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.853984   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:47.853990   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:47.854062   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:47.892208   68713 cri.go:89] found id: ""
	I0815 18:40:47.892237   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.892246   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:47.892252   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:47.892311   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:47.926916   68713 cri.go:89] found id: ""
	I0815 18:40:47.926944   68713 logs.go:276] 0 containers: []
	W0815 18:40:47.926965   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:47.926976   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:47.926990   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:48.002907   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:48.002927   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:48.002942   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:48.085727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:48.085762   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:48.127192   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:48.127224   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:48.180172   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:48.180208   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:50.694573   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:50.709411   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:50.709472   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:50.750956   68713 cri.go:89] found id: ""
	I0815 18:40:50.750985   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.750994   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:50.751000   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:50.751048   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:50.791072   68713 cri.go:89] found id: ""
	I0815 18:40:50.791149   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.791174   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:50.791186   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:50.791247   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:50.827692   68713 cri.go:89] found id: ""
	I0815 18:40:50.827717   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.827728   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:50.827735   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:50.827794   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:50.866587   68713 cri.go:89] found id: ""
	I0815 18:40:50.866616   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.866626   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:50.866633   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:50.866692   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:50.907012   68713 cri.go:89] found id: ""
	I0815 18:40:50.907040   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.907047   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:50.907053   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:50.907101   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:50.951212   68713 cri.go:89] found id: ""
	I0815 18:40:50.951243   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.951256   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:50.951263   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:50.951316   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:50.989771   68713 cri.go:89] found id: ""
	I0815 18:40:50.989802   68713 logs.go:276] 0 containers: []
	W0815 18:40:50.989812   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:50.989818   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:50.989867   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:51.024423   68713 cri.go:89] found id: ""
	I0815 18:40:51.024454   68713 logs.go:276] 0 containers: []
	W0815 18:40:51.024465   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:51.024475   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:51.024500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:51.076973   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:51.077012   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:51.090963   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:51.090989   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:51.169981   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:51.170005   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:51.170029   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:51.248990   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:51.249040   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:50.349092   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.350278   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:50.154517   68248 pod_ready.go:103] pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:52.148131   68248 pod_ready.go:82] duration metric: took 4m0.000077937s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" ...
	E0815 18:40:52.148161   68248 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-wp5rn" in "kube-system" namespace to be "Ready" (will not retry!)
	I0815 18:40:52.148183   68248 pod_ready.go:39] duration metric: took 4m13.224994468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:40:52.148235   68248 kubeadm.go:597] duration metric: took 4m20.945128985s to restartPrimaryControlPlane
	W0815 18:40:52.148324   68248 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:40:52.148376   68248 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:40:51.660289   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:54.157718   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:53.790172   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:53.803752   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:53.803816   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:53.843203   68713 cri.go:89] found id: ""
	I0815 18:40:53.843231   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.843246   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:53.843254   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:53.843314   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:53.878975   68713 cri.go:89] found id: ""
	I0815 18:40:53.879000   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.879008   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:53.879013   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:53.879078   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:53.915640   68713 cri.go:89] found id: ""
	I0815 18:40:53.915668   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.915675   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:53.915683   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:53.915746   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:53.956312   68713 cri.go:89] found id: ""
	I0815 18:40:53.956340   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.956356   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:53.956365   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:53.956426   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:53.992276   68713 cri.go:89] found id: ""
	I0815 18:40:53.992304   68713 logs.go:276] 0 containers: []
	W0815 18:40:53.992314   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:53.992322   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:53.992387   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:54.034653   68713 cri.go:89] found id: ""
	I0815 18:40:54.034682   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.034693   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:54.034701   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:54.034761   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:54.072993   68713 cri.go:89] found id: ""
	I0815 18:40:54.073018   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.073027   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:54.073038   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:54.073107   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:54.107414   68713 cri.go:89] found id: ""
	I0815 18:40:54.107446   68713 logs.go:276] 0 containers: []
	W0815 18:40:54.107456   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:54.107466   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:54.107481   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:54.145900   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:54.145928   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:54.197609   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:54.197639   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:54.211384   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:54.211410   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:54.280991   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:54.281018   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:54.281031   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:56.868270   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:56.881168   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:56.881248   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:56.915206   68713 cri.go:89] found id: ""
	I0815 18:40:56.915235   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.915243   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:56.915249   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:56.915308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:40:56.950838   68713 cri.go:89] found id: ""
	I0815 18:40:56.950864   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.950873   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:40:56.950879   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:40:56.950937   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:40:56.993625   68713 cri.go:89] found id: ""
	I0815 18:40:56.993649   68713 logs.go:276] 0 containers: []
	W0815 18:40:56.993656   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:40:56.993662   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:40:56.993718   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:40:57.029109   68713 cri.go:89] found id: ""
	I0815 18:40:57.029139   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.029150   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:40:57.029158   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:40:57.029213   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:40:57.063480   68713 cri.go:89] found id: ""
	I0815 18:40:57.063518   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.063530   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:40:57.063538   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:40:57.063598   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:40:57.102830   68713 cri.go:89] found id: ""
	I0815 18:40:57.102859   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.102870   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:40:57.102877   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:40:57.102938   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:40:57.137116   68713 cri.go:89] found id: ""
	I0815 18:40:57.137146   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.137159   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:40:57.137173   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:40:57.137235   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:40:57.174678   68713 cri.go:89] found id: ""
	I0815 18:40:57.174706   68713 logs.go:276] 0 containers: []
	W0815 18:40:57.174717   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:40:57.174727   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:40:57.174741   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:40:57.213270   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:40:57.213311   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:57.269463   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:40:57.269500   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:40:57.283891   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:40:57.283915   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:40:57.355563   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:40:57.355589   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:40:57.355601   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:40:54.849266   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:57.350343   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:56.657843   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:58.658098   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:40:59.943493   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:40:59.957225   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:40:59.957285   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:40:59.993113   68713 cri.go:89] found id: ""
	I0815 18:40:59.993142   68713 logs.go:276] 0 containers: []
	W0815 18:40:59.993153   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:40:59.993167   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:40:59.993228   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:00.033485   68713 cri.go:89] found id: ""
	I0815 18:41:00.033515   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.033525   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:00.033533   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:00.033594   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:00.070808   68713 cri.go:89] found id: ""
	I0815 18:41:00.070830   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.070838   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:00.070844   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:00.070893   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:00.113043   68713 cri.go:89] found id: ""
	I0815 18:41:00.113067   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.113076   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:00.113082   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:00.113139   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:00.148089   68713 cri.go:89] found id: ""
	I0815 18:41:00.148118   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.148129   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:00.148136   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:00.148206   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:00.188343   68713 cri.go:89] found id: ""
	I0815 18:41:00.188375   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.188386   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:00.188394   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:00.188448   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:00.224287   68713 cri.go:89] found id: ""
	I0815 18:41:00.224312   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.224323   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:00.224337   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:00.224398   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:00.263983   68713 cri.go:89] found id: ""
	I0815 18:41:00.264008   68713 logs.go:276] 0 containers: []
	W0815 18:41:00.264016   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:00.264025   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:00.264037   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:00.278057   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:00.278083   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:00.355112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:00.355133   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:00.355146   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:00.436636   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:00.436672   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:00.474774   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:00.474801   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:40:59.849797   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:02.349363   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:01.158004   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.158380   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:05.658860   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:03.027434   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:03.041422   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:03.041496   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:03.074093   68713 cri.go:89] found id: ""
	I0815 18:41:03.074119   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.074130   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:03.074138   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:03.074198   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:03.111489   68713 cri.go:89] found id: ""
	I0815 18:41:03.111517   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.111529   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:03.111537   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:03.111599   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:03.147716   68713 cri.go:89] found id: ""
	I0815 18:41:03.147747   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.147756   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:03.147762   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:03.147825   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:03.184609   68713 cri.go:89] found id: ""
	I0815 18:41:03.184635   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.184644   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:03.184652   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:03.184710   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:03.221839   68713 cri.go:89] found id: ""
	I0815 18:41:03.221869   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.221878   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:03.221883   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:03.221935   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:03.262619   68713 cri.go:89] found id: ""
	I0815 18:41:03.262649   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.262661   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:03.262669   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:03.262733   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:03.297826   68713 cri.go:89] found id: ""
	I0815 18:41:03.297849   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.297864   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:03.297875   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:03.297922   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:03.345046   68713 cri.go:89] found id: ""
	I0815 18:41:03.345074   68713 logs.go:276] 0 containers: []
	W0815 18:41:03.345083   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:03.345095   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:03.345133   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:03.416878   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:03.416905   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:03.416920   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:03.491548   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:03.491583   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:03.533821   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:03.533852   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:03.587749   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:03.587787   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.104002   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:06.118123   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:06.118195   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:06.156179   68713 cri.go:89] found id: ""
	I0815 18:41:06.156204   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.156213   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:06.156218   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:06.156275   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:06.192834   68713 cri.go:89] found id: ""
	I0815 18:41:06.192858   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.192866   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:06.192871   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:06.192918   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:06.228355   68713 cri.go:89] found id: ""
	I0815 18:41:06.228379   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.228387   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:06.228393   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:06.228453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:06.262041   68713 cri.go:89] found id: ""
	I0815 18:41:06.262068   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.262079   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:06.262086   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:06.262152   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:06.303217   68713 cri.go:89] found id: ""
	I0815 18:41:06.303249   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.303261   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:06.303268   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:06.303335   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:06.337180   68713 cri.go:89] found id: ""
	I0815 18:41:06.337208   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.337215   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:06.337222   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:06.337270   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:06.375054   68713 cri.go:89] found id: ""
	I0815 18:41:06.375081   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.375088   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:06.375095   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:06.375163   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:06.412188   68713 cri.go:89] found id: ""
	I0815 18:41:06.412216   68713 logs.go:276] 0 containers: []
	W0815 18:41:06.412227   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:06.412239   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:06.412255   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:06.425607   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:06.425633   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:06.500853   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:06.500872   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:06.500883   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:06.577297   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:06.577333   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:06.620209   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:06.620239   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:04.848677   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:06.849254   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.849300   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:08.157734   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:10.157969   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:09.171606   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:09.184197   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:09.184257   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:09.217865   68713 cri.go:89] found id: ""
	I0815 18:41:09.217893   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.217904   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:09.217912   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:09.217967   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:09.254032   68713 cri.go:89] found id: ""
	I0815 18:41:09.254055   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.254064   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:09.254073   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:09.254128   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:09.291772   68713 cri.go:89] found id: ""
	I0815 18:41:09.291798   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.291808   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:09.291816   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:09.291880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:09.326695   68713 cri.go:89] found id: ""
	I0815 18:41:09.326717   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.326726   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:09.326731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:09.326791   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:09.365779   68713 cri.go:89] found id: ""
	I0815 18:41:09.365807   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.365818   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:09.365825   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:09.365880   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:09.413475   68713 cri.go:89] found id: ""
	I0815 18:41:09.413500   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.413509   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:09.413514   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:09.413578   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:09.449483   68713 cri.go:89] found id: ""
	I0815 18:41:09.449511   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.449521   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:09.449528   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:09.449623   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:09.487484   68713 cri.go:89] found id: ""
	I0815 18:41:09.487513   68713 logs.go:276] 0 containers: []
	W0815 18:41:09.487525   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:09.487535   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:09.487549   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:09.536746   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:09.536777   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:09.549912   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:09.549944   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:09.619192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:09.619227   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:09.619246   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:09.698370   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:09.698404   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:12.240745   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:12.254814   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:12.254875   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:12.291346   68713 cri.go:89] found id: ""
	I0815 18:41:12.291376   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.291387   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:41:12.291395   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:12.291456   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:12.324832   68713 cri.go:89] found id: ""
	I0815 18:41:12.324867   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.324878   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:41:12.324886   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:12.324950   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:12.360172   68713 cri.go:89] found id: ""
	I0815 18:41:12.360193   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.360201   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:41:12.360206   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:12.360251   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:12.394671   68713 cri.go:89] found id: ""
	I0815 18:41:12.394700   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.394710   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:41:12.394731   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:12.394800   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:12.428951   68713 cri.go:89] found id: ""
	I0815 18:41:12.428999   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.429007   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:41:12.429013   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:12.429057   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:12.466035   68713 cri.go:89] found id: ""
	I0815 18:41:12.466061   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.466069   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:41:12.466075   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:12.466125   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:12.500003   68713 cri.go:89] found id: ""
	I0815 18:41:12.500031   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.500042   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:12.500050   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:41:12.500105   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:41:12.537433   68713 cri.go:89] found id: ""
	I0815 18:41:12.537457   68713 logs.go:276] 0 containers: []
	W0815 18:41:12.537464   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:41:12.537473   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:12.537484   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:12.586768   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:12.586809   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:12.600549   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:12.600578   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:41:12.673112   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:41:12.673138   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:12.673154   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:12.754689   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:41:12.754726   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:11.348767   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.349973   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:12.158249   68429 pod_ready.go:103] pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:13.158354   68429 pod_ready.go:82] duration metric: took 4m0.006607137s for pod "metrics-server-6867b74b74-8mppk" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:13.158373   68429 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:13.158381   68429 pod_ready.go:39] duration metric: took 4m7.064501997s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:13.158395   68429 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:13.158423   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:13.158467   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:13.203746   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.203771   68429 cri.go:89] found id: ""
	I0815 18:41:13.203779   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:13.203840   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.208188   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:13.208248   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:13.245326   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.245351   68429 cri.go:89] found id: ""
	I0815 18:41:13.245359   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:13.245412   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.250212   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:13.250281   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:13.296537   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:13.296565   68429 cri.go:89] found id: ""
	I0815 18:41:13.296576   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:13.296634   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.300823   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:13.300881   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:13.337973   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.338018   68429 cri.go:89] found id: ""
	I0815 18:41:13.338031   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:13.338083   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.342251   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:13.342307   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:13.379921   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.379948   68429 cri.go:89] found id: ""
	I0815 18:41:13.379957   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:13.380005   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.384451   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:13.384539   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:13.421077   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:13.421113   68429 cri.go:89] found id: ""
	I0815 18:41:13.421122   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:13.421180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.425566   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:13.425640   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:13.468663   68429 cri.go:89] found id: ""
	I0815 18:41:13.468688   68429 logs.go:276] 0 containers: []
	W0815 18:41:13.468696   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:13.468701   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:13.468753   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:13.506689   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:13.506711   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:13.506715   68429 cri.go:89] found id: ""
	I0815 18:41:13.506723   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:13.506784   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.511177   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:13.515519   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:13.515543   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:13.583771   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:13.583806   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:13.714906   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:13.714945   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:13.766512   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:13.766548   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:13.818416   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:13.818450   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:13.859035   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:13.859073   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:13.901515   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:13.901546   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:14.437262   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:14.437304   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:14.453511   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:14.453551   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:14.489238   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:14.489267   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:14.540141   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:14.540184   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:14.574758   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:14.574785   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:14.609370   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:14.609398   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:15.294667   68713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:15.307758   68713 kubeadm.go:597] duration metric: took 4m2.67500099s to restartPrimaryControlPlane
	W0815 18:41:15.307840   68713 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0815 18:41:15.307872   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:41:15.761255   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:15.776049   68713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:15.786643   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:15.796517   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:15.796537   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:15.796585   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:15.806118   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:15.806167   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:15.816363   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:15.826396   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:15.826449   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:15.836538   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.847035   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:15.847093   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:15.857475   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:15.867084   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:15.867144   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
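The grep/rm sequence above is the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it still references https://control-plane.minikube.internal:8443, and is removed otherwise before kubeadm regenerates it. A minimal sketch of that check, assuming direct filesystem access in place of minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleKubeconfigs keeps a kubeconfig only if it still references the
    // expected control-plane endpoint, mirroring the grep/rm sequence above.
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			_ = os.Remove(p) // mirrors: sudo rm -f <conf>
    			fmt.Println("removed (or absent):", p)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }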
	I0815 18:41:15.879736   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:15.954497   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:41:15.954588   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:16.098128   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:16.098244   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:16.098345   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:41:16.288507   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:16.290439   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:16.290555   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:16.290656   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:16.290756   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:16.290831   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:16.290923   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:16.291003   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:16.291096   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:16.291182   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:16.291280   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:16.291396   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:16.291457   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:16.291509   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:16.363570   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:16.549782   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:16.789250   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:16.983388   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:17.004293   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:17.006438   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:17.006485   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:17.154583   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:17.156594   68713 out.go:235]   - Booting up control plane ...
	I0815 18:41:17.156717   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:17.177351   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:17.179286   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:17.180313   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:17.183829   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:41:15.850424   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.348986   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:18.430273   68248 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.281857018s)
	I0815 18:41:18.430359   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:18.445633   68248 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 18:41:18.457459   68248 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:41:18.469748   68248 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:41:18.469769   68248 kubeadm.go:157] found existing configuration files:
	
	I0815 18:41:18.469818   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:41:18.480099   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:41:18.480146   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:41:18.491871   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:41:18.501274   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:41:18.501339   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:41:18.510186   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.518803   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:41:18.518863   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:41:18.527843   68248 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:41:18.536437   68248 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:41:18.536514   68248 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:41:18.545573   68248 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:41:18.596478   68248 kubeadm.go:310] W0815 18:41:18.577134    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.597311   68248 kubeadm.go:310] W0815 18:41:18.577958    2561 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 18:41:18.709937   68248 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:41:17.151343   68429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:17.173653   68429 api_server.go:72] duration metric: took 4m18.293407117s to wait for apiserver process to appear ...
	I0815 18:41:17.173677   68429 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:17.173724   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:17.173784   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:17.211484   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.211509   68429 cri.go:89] found id: ""
	I0815 18:41:17.211518   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:17.211583   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.216011   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:17.216107   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:17.265454   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.265486   68429 cri.go:89] found id: ""
	I0815 18:41:17.265497   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:17.265554   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.269804   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:17.269868   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:17.310339   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.310363   68429 cri.go:89] found id: ""
	I0815 18:41:17.310371   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:17.310435   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.315639   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:17.315695   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:17.352364   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.352387   68429 cri.go:89] found id: ""
	I0815 18:41:17.352396   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:17.352452   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.356782   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:17.356848   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:17.396704   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.396733   68429 cri.go:89] found id: ""
	I0815 18:41:17.396744   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:17.396799   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.400920   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:17.400985   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:17.440361   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.440390   68429 cri.go:89] found id: ""
	I0815 18:41:17.440400   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:17.440464   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.445057   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:17.445127   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:17.487341   68429 cri.go:89] found id: ""
	I0815 18:41:17.487369   68429 logs.go:276] 0 containers: []
	W0815 18:41:17.487380   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:17.487388   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:17.487446   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:17.528197   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.528218   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.528223   68429 cri.go:89] found id: ""
	I0815 18:41:17.528229   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:17.528285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.532536   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:17.536745   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:17.536768   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:17.574236   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:17.574268   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:17.617822   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:17.617853   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:17.673009   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:17.673037   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:17.717620   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:17.717647   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:17.764641   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:17.764671   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:17.815586   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:17.815618   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:17.855287   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:17.855310   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:17.906486   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:17.906514   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:17.941540   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:17.941562   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:18.373461   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:18.373497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:18.454203   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:18.454244   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:18.470284   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:18.470315   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:20.349635   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:22.350034   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:21.080947   68429 api_server.go:253] Checking apiserver healthz at https://192.168.61.7:8444/healthz ...
	I0815 18:41:21.085334   68429 api_server.go:279] https://192.168.61.7:8444/healthz returned 200:
	ok
	I0815 18:41:21.086420   68429 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:21.086442   68429 api_server.go:131] duration metric: took 3.912756949s to wait for apiserver health ...
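The healthz wait above polls the apiserver endpoint until it answers 200 with body "ok", then reads the control-plane version. A minimal sketch of such a probe; skipping TLS verification here is an assumption to keep the example self-contained, not necessarily how minikube performs the check.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy reports whether GET <url> returns 200 with body "ok",
    // as in the healthz probe logged above.
    func apiserverHealthy(url string) bool {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok"
    }

    func main() {
    	fmt.Println(apiserverHealthy("https://192.168.61.7:8444/healthz"))
    }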
	I0815 18:41:21.086452   68429 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:21.086481   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:21.086537   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:21.124183   68429 cri.go:89] found id: "a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.124210   68429 cri.go:89] found id: ""
	I0815 18:41:21.124218   68429 logs.go:276] 1 containers: [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428]
	I0815 18:41:21.124285   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.128402   68429 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:21.128472   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:21.164737   68429 cri.go:89] found id: "7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.164768   68429 cri.go:89] found id: ""
	I0815 18:41:21.164779   68429 logs.go:276] 1 containers: [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3]
	I0815 18:41:21.164835   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.170622   68429 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:21.170699   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:21.206823   68429 cri.go:89] found id: "4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.206847   68429 cri.go:89] found id: ""
	I0815 18:41:21.206855   68429 logs.go:276] 1 containers: [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99]
	I0815 18:41:21.206910   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.211055   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:21.211128   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:21.255529   68429 cri.go:89] found id: "4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.255555   68429 cri.go:89] found id: ""
	I0815 18:41:21.255565   68429 logs.go:276] 1 containers: [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2]
	I0815 18:41:21.255629   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.260062   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:21.260139   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:21.298058   68429 cri.go:89] found id: "78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.298116   68429 cri.go:89] found id: ""
	I0815 18:41:21.298124   68429 logs.go:276] 1 containers: [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad]
	I0815 18:41:21.298180   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.302821   68429 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:21.302892   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:21.340895   68429 cri.go:89] found id: "b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.340925   68429 cri.go:89] found id: ""
	I0815 18:41:21.340936   68429 logs.go:276] 1 containers: [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c]
	I0815 18:41:21.341003   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.345545   68429 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:21.345638   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:21.383180   68429 cri.go:89] found id: ""
	I0815 18:41:21.383212   68429 logs.go:276] 0 containers: []
	W0815 18:41:21.383223   68429 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:21.383232   68429 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:21.383301   68429 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:21.421152   68429 cri.go:89] found id: "5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:21.421178   68429 cri.go:89] found id: "de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.421185   68429 cri.go:89] found id: ""
	I0815 18:41:21.421198   68429 logs.go:276] 2 containers: [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87]
	I0815 18:41:21.421257   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.426326   68429 ssh_runner.go:195] Run: which crictl
	I0815 18:41:21.430307   68429 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:21.430351   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:41:21.562655   68429 logs.go:123] Gathering logs for kube-apiserver [a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428] ...
	I0815 18:41:21.562697   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a728cb5e05d1dc5a0b906c91d548aae0752bca4ab4513d16a0edc43619e11428"
	I0815 18:41:21.613436   68429 logs.go:123] Gathering logs for etcd [7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3] ...
	I0815 18:41:21.613470   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c7302ebd91e38ebcd82ade46758b02832a93abb6cab3f854f788180086a4fe3"
	I0815 18:41:21.674678   68429 logs.go:123] Gathering logs for coredns [4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99] ...
	I0815 18:41:21.674721   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4002a75569d01be320f402b571dd245f409796deec00d408a3e39979d5d86e99"
	I0815 18:41:21.717283   68429 logs.go:123] Gathering logs for kube-scheduler [4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2] ...
	I0815 18:41:21.717316   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ff0eaf196e914b0db29e9e93e8cec79592e09441481f4a8e7e8601c7801d0a2"
	I0815 18:41:21.760218   68429 logs.go:123] Gathering logs for kube-proxy [78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad] ...
	I0815 18:41:21.760249   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78aa18ab3ca1d3548c4c14caf659fc9e3da7c19cdc8aad60653bdd06d745acad"
	I0815 18:41:21.802313   68429 logs.go:123] Gathering logs for kube-controller-manager [b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c] ...
	I0815 18:41:21.802352   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5437880e3b54433be49592fef26d0b2f2f7fcd849f6de5e2641579496290a3c"
	I0815 18:41:21.874565   68429 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:21.874608   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:21.891629   68429 logs.go:123] Gathering logs for container status ...
	I0815 18:41:21.891666   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:41:21.934128   68429 logs.go:123] Gathering logs for storage-provisioner [de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87] ...
	I0815 18:41:21.934170   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de97b6534ff12ac62d5b186d225c63fb03f093c65728f389a51fba174c30fc87"
	I0815 18:41:21.985467   68429 logs.go:123] Gathering logs for storage-provisioner [5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e] ...
	I0815 18:41:21.985497   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ba0de31ac4d01e78b1b0a8824be2b96c7a18d51f5220618e67303f1bbc96b1e"
	I0815 18:41:22.023731   68429 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:41:22.023770   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:41:22.403584   68429 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:22.403626   68429 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:25.005734   68429 system_pods.go:59] 8 kube-system pods found
	I0815 18:41:25.005760   68429 system_pods.go:61] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.005766   68429 system_pods.go:61] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.005770   68429 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.005775   68429 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.005778   68429 system_pods.go:61] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.005781   68429 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.005788   68429 system_pods.go:61] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.005793   68429 system_pods.go:61] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.005799   68429 system_pods.go:74] duration metric: took 3.919341536s to wait for pod list to return data ...
	I0815 18:41:25.005806   68429 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:25.008398   68429 default_sa.go:45] found service account: "default"
	I0815 18:41:25.008419   68429 default_sa.go:55] duration metric: took 2.608281ms for default service account to be created ...
	I0815 18:41:25.008427   68429 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:25.012784   68429 system_pods.go:86] 8 kube-system pods found
	I0815 18:41:25.012804   68429 system_pods.go:89] "coredns-6f6b679f8f-brc2r" [d16add35-fdfd-4a39-8814-ec74318ae245] Running
	I0815 18:41:25.012810   68429 system_pods.go:89] "etcd-default-k8s-diff-port-423062" [548842b6-9adc-487f-850c-7453f38ac2da] Running
	I0815 18:41:25.012817   68429 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-423062" [b4e3c851-64bd-43ab-9ff4-216286b09e13] Running
	I0815 18:41:25.012821   68429 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-423062" [73b5912c-3eaf-46a2-90fb-71f8a3b5fb3f] Running
	I0815 18:41:25.012825   68429 system_pods.go:89] "kube-proxy-bnxv7" [f3915f67-899a-40b9-bb2a-adef461b6320] Running
	I0815 18:41:25.012828   68429 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-423062" [96487715-b49c-4d24-837c-053a24617f71] Running
	I0815 18:41:25.012834   68429 system_pods.go:89] "metrics-server-6867b74b74-8mppk" [27b1cd42-fec2-44d2-95f4-207d5aedb1db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:25.012838   68429 system_pods.go:89] "storage-provisioner" [9645f17f-82b6-4f8c-9a37-203ed53fbea8] Running
	I0815 18:41:25.012850   68429 system_pods.go:126] duration metric: took 4.415694ms to wait for k8s-apps to be running ...
	I0815 18:41:25.012858   68429 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:25.012905   68429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:25.028245   68429 system_svc.go:56] duration metric: took 15.378403ms WaitForService to wait for kubelet
	I0815 18:41:25.028272   68429 kubeadm.go:582] duration metric: took 4m26.148030358s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:25.028290   68429 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:25.030696   68429 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:25.030717   68429 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:25.030728   68429 node_conditions.go:105] duration metric: took 2.43352ms to run NodePressure ...
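The NodePressure verification above reads the node's reported capacity (ephemeral storage 17734596Ki, 2 CPUs). One way to inspect the same fields by hand, as a sketch assuming kubectl is on PATH and pointed at the cluster:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Print each node's CPU and ephemeral-storage capacity, the two fields
    // checked in the NodePressure verification above.
    func main() {
    	out, err := exec.Command("kubectl", "get", "nodes", "-o",
    		"jsonpath={range .items[*]}{.metadata.name}: cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}{\"\\n\"}{end}").Output()
    	if err != nil {
    		fmt.Println("kubectl get nodes:", err)
    		return
    	}
    	fmt.Print(string(out))
    }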
	I0815 18:41:25.030742   68429 start.go:241] waiting for startup goroutines ...
	I0815 18:41:25.030751   68429 start.go:246] waiting for cluster config update ...
	I0815 18:41:25.030768   68429 start.go:255] writing updated cluster config ...
	I0815 18:41:25.031028   68429 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:25.077910   68429 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:25.079973   68429 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-423062" cluster and "default" namespace by default
	I0815 18:41:27.911884   68248 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 18:41:27.911943   68248 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:41:27.912011   68248 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:41:27.912130   68248 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:41:27.912272   68248 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 18:41:27.912359   68248 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:41:27.913884   68248 out.go:235]   - Generating certificates and keys ...
	I0815 18:41:27.913990   68248 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:41:27.914092   68248 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:41:27.914197   68248 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:41:27.914289   68248 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:41:27.914362   68248 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:41:27.914433   68248 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:41:27.914521   68248 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:41:27.914606   68248 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:41:27.914859   68248 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:41:27.914984   68248 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:41:27.915040   68248 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:41:27.915119   68248 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:41:27.915190   68248 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:41:27.915268   68248 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 18:41:27.915336   68248 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:41:27.915419   68248 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:41:27.915500   68248 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:41:27.915606   68248 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:41:27.915691   68248 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:41:27.917229   68248 out.go:235]   - Booting up control plane ...
	I0815 18:41:27.917311   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:41:27.917377   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:41:27.917433   68248 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:41:27.917521   68248 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:41:27.917590   68248 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:41:27.917623   68248 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:41:27.917740   68248 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 18:41:27.917829   68248 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 18:41:27.917880   68248 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00200618s
	I0815 18:41:27.917954   68248 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 18:41:27.918011   68248 kubeadm.go:310] [api-check] The API server is healthy after 5.501605719s
	I0815 18:41:27.918122   68248 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 18:41:27.918268   68248 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 18:41:27.918361   68248 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 18:41:27.918626   68248 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-555028 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 18:41:27.918723   68248 kubeadm.go:310] [bootstrap-token] Using token: 99xu37.bm6hiisu91f6rbvd
	I0815 18:41:27.920248   68248 out.go:235]   - Configuring RBAC rules ...
	I0815 18:41:27.920360   68248 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 18:41:27.920467   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 18:41:27.920651   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 18:41:27.920785   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 18:41:27.920938   68248 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 18:41:27.921052   68248 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 18:41:27.921225   68248 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 18:41:27.921286   68248 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 18:41:27.921356   68248 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 18:41:27.921369   68248 kubeadm.go:310] 
	I0815 18:41:27.921422   68248 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 18:41:27.921428   68248 kubeadm.go:310] 
	I0815 18:41:27.921488   68248 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 18:41:27.921497   68248 kubeadm.go:310] 
	I0815 18:41:27.921521   68248 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 18:41:27.921570   68248 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 18:41:27.921619   68248 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 18:41:27.921625   68248 kubeadm.go:310] 
	I0815 18:41:27.921698   68248 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 18:41:27.921711   68248 kubeadm.go:310] 
	I0815 18:41:27.921776   68248 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 18:41:27.921787   68248 kubeadm.go:310] 
	I0815 18:41:27.921858   68248 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 18:41:27.921963   68248 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 18:41:27.922055   68248 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 18:41:27.922064   68248 kubeadm.go:310] 
	I0815 18:41:27.922166   68248 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 18:41:27.922281   68248 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 18:41:27.922306   68248 kubeadm.go:310] 
	I0815 18:41:27.922413   68248 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922550   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 \
	I0815 18:41:27.922593   68248 kubeadm.go:310] 	--control-plane 
	I0815 18:41:27.922603   68248 kubeadm.go:310] 
	I0815 18:41:27.922703   68248 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 18:41:27.922712   68248 kubeadm.go:310] 
	I0815 18:41:27.922800   68248 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 99xu37.bm6hiisu91f6rbvd \
	I0815 18:41:27.922901   68248 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a615a2fffd905671934e0efe80700eff15301401ccddf84bdfee23c4488ab2c2 
	I0815 18:41:27.922909   68248 cni.go:84] Creating CNI manager for ""
	I0815 18:41:27.922916   68248 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 18:41:27.924596   68248 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0815 18:41:24.849483   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.350715   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:27.926142   68248 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0815 18:41:27.938307   68248 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
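The 496-byte /etc/cni/net.d/1-k8s.conflist written above is not reproduced in the log. For orientation, here is a sketch that prints a bridge CNI configuration of the general shape such a file takes; every field value below is an assumption, not the file minikube actually wrote.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // An illustrative bridge CNI conflist; field values are assumptions.
    func main() {
    	conflist := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":      "bridge",
    				"bridge":    "bridge",
    				"isGateway": true,
    				"ipMasq":    true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{
    				"type":         "portmap",
    				"capabilities": map[string]bool{"portMappings": true},
    			},
    		},
    	}
    	out, _ := json.MarshalIndent(conflist, "", "  ")
    	fmt.Println(string(out))
    }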
	I0815 18:41:27.958862   68248 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 18:41:27.958974   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:27.959032   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-555028 minikube.k8s.io/updated_at=2024_08_15T18_41_27_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=embed-certs-555028 minikube.k8s.io/primary=true
	I0815 18:41:28.156844   68248 ops.go:34] apiserver oom_adj: -16
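The oom_adj line above comes from reading /proc/<pid>/oom_adj for the kube-apiserver process (observed value -16). A small sketch of the same check, assuming a local /proc and pgrep:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Find the newest kube-apiserver process and read its oom_adj, the check
    // whose result (-16) is logged above.
    func main() {
    	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("kube-apiserver not running:", err)
    		return
    	}
    	val, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		fmt.Println("read oom_adj:", err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(val)))
    }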
	I0815 18:41:28.157122   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:28.657735   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.157713   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:29.658109   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.157486   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:30.657573   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.157463   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.658073   68248 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 18:41:31.757929   68248 kubeadm.go:1113] duration metric: took 3.799012728s to wait for elevateKubeSystemPrivileges
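The repeated `kubectl get sa default` runs above form a retry loop: the command is re-issued roughly every half second until the default service account exists, and the elapsed time is recorded as the elevateKubeSystemPrivileges duration. A minimal sketch of that wait, with the kubectl path and kubeconfig location as assumptions:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA re-runs `kubectl get sa default` on a short interval
    // until it succeeds, mirroring the retry loop above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command(kubectl, "get", "sa", "default",
    			"--kubeconfig", kubeconfig).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
    	fmt.Println(waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute))
    }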
	I0815 18:41:31.757969   68248 kubeadm.go:394] duration metric: took 5m0.607372858s to StartCluster
	I0815 18:41:31.757992   68248 settings.go:142] acquiring lock: {Name:mkf1b73e879630caa9a1115f3bce4fc3aa73b198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.758070   68248 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:41:31.759686   68248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/kubeconfig: {Name:mk22b710b7a8429f47cace1c695c6ca6bf82796b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 18:41:31.759915   68248 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.234 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0815 18:41:31.759982   68248 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 18:41:31.760072   68248 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-555028"
	I0815 18:41:31.760090   68248 addons.go:69] Setting default-storageclass=true in profile "embed-certs-555028"
	I0815 18:41:31.760115   68248 addons.go:69] Setting metrics-server=true in profile "embed-certs-555028"
	I0815 18:41:31.760133   68248 addons.go:234] Setting addon metrics-server=true in "embed-certs-555028"
	W0815 18:41:31.760141   68248 addons.go:243] addon metrics-server should already be in state true
	I0815 18:41:31.760148   68248 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-555028"
	I0815 18:41:31.760174   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760110   68248 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-555028"
	W0815 18:41:31.760230   68248 addons.go:243] addon storage-provisioner should already be in state true
	I0815 18:41:31.760270   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.760108   68248 config.go:182] Loaded profile config "embed-certs-555028": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:41:31.760603   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760619   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760637   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760642   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.760658   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.760708   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.761566   68248 out.go:177] * Verifying Kubernetes components...
	I0815 18:41:31.762780   68248 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 18:41:31.777893   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37955
	I0815 18:41:31.778444   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.779021   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.779049   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.779496   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.780129   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.780182   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.780954   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40291
	I0815 18:41:31.781146   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39143
	I0815 18:41:31.781506   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.781586   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.782056   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782061   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.782078   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782079   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.782437   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782494   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.782685   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.783004   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.783034   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.786246   68248 addons.go:234] Setting addon default-storageclass=true in "embed-certs-555028"
	W0815 18:41:31.786270   68248 addons.go:243] addon default-storageclass should already be in state true
	I0815 18:41:31.786300   68248 host.go:66] Checking if "embed-certs-555028" exists ...
	I0815 18:41:31.786682   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.786714   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.800152   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36619
	I0815 18:41:31.800729   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.801272   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.801295   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.801656   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.801835   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.803539   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39975
	I0815 18:41:31.803751   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.804058   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.804640   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.804660   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.805007   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.805157   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.806098   68248 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 18:41:31.806397   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0815 18:41:31.806814   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.807269   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.807450   68248 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:31.807466   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 18:41:31.807484   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.807744   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.807757   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.808066   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.808889   68248 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:41:31.808923   68248 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:41:31.809143   68248 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 18:41:31.810575   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 18:41:31.810593   68248 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 18:41:31.810619   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.810648   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811760   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.811761   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.811802   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.811948   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.812101   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.812243   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.814211   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814653   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.814675   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.814953   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.815117   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.815271   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.815391   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.829657   68248 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0815 18:41:31.830122   68248 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:41:31.830710   68248 main.go:141] libmachine: Using API Version  1
	I0815 18:41:31.830734   68248 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:41:31.831077   68248 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:41:31.831291   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetState
	I0815 18:41:31.833016   68248 main.go:141] libmachine: (embed-certs-555028) Calling .DriverName
	I0815 18:41:31.833271   68248 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:31.833285   68248 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 18:41:31.833302   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHHostname
	I0815 18:41:31.836248   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836655   68248 main.go:141] libmachine: (embed-certs-555028) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:59:7b", ip: ""} in network mk-embed-certs-555028: {Iface:virbr2 ExpiryTime:2024-08-15 19:36:17 +0000 UTC Type:0 Mac:52:54:00:5c:59:7b Iaid: IPaddr:192.168.50.234 Prefix:24 Hostname:embed-certs-555028 Clientid:01:52:54:00:5c:59:7b}
	I0815 18:41:31.836682   68248 main.go:141] libmachine: (embed-certs-555028) DBG | domain embed-certs-555028 has defined IP address 192.168.50.234 and MAC address 52:54:00:5c:59:7b in network mk-embed-certs-555028
	I0815 18:41:31.836908   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHPort
	I0815 18:41:31.837097   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHKeyPath
	I0815 18:41:31.837233   68248 main.go:141] libmachine: (embed-certs-555028) Calling .GetSSHUsername
	I0815 18:41:31.837410   68248 sshutil.go:53] new ssh client: &{IP:192.168.50.234 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/embed-certs-555028/id_rsa Username:docker}
	I0815 18:41:31.988466   68248 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 18:41:32.010147   68248 node_ready.go:35] waiting up to 6m0s for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019505   68248 node_ready.go:49] node "embed-certs-555028" has status "Ready":"True"
	I0815 18:41:32.019529   68248 node_ready.go:38] duration metric: took 9.346825ms for node "embed-certs-555028" to be "Ready" ...
	I0815 18:41:32.019541   68248 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:32.032036   68248 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:32.125991   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 18:41:32.138532   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 18:41:32.138554   68248 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 18:41:32.155222   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 18:41:32.196478   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 18:41:32.196517   68248 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 18:41:32.270461   68248 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:32.270495   68248 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 18:41:32.405567   68248 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 18:41:33.205712   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.050454437s)
	I0815 18:41:33.205772   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205785   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.205793   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.079759984s)
	I0815 18:41:33.205826   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.205838   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206153   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206169   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206184   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206194   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206200   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206205   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206210   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206218   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.206202   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.206226   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.206415   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206421   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.206430   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.206432   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.245033   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.245061   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.245328   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.245343   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.651886   68248 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.246273862s)
	I0815 18:41:33.651945   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.651960   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652264   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652307   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652311   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.652326   68248 main.go:141] libmachine: Making call to close driver server
	I0815 18:41:33.652335   68248 main.go:141] libmachine: (embed-certs-555028) Calling .Close
	I0815 18:41:33.652618   68248 main.go:141] libmachine: Successfully made call to close driver server
	I0815 18:41:33.652640   68248 main.go:141] libmachine: Making call to close connection to plugin binary
	I0815 18:41:33.652650   68248 addons.go:475] Verifying addon metrics-server=true in "embed-certs-555028"
	I0815 18:41:33.652697   68248 main.go:141] libmachine: (embed-certs-555028) DBG | Closing plugin on server side
	I0815 18:41:33.654487   68248 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0815 18:41:29.848462   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:31.850877   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:33.655868   68248 addons.go:510] duration metric: took 1.89588756s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
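For reference, the addon step logged above boils down to copying each manifest to /etc/kubernetes/addons/ on the guest and applying it with the bundled kubectl under the minikube kubeconfig. A minimal sketch of that apply call, with paths taken from the log; running it directly on the guest (rather than through minikube's ssh_runner over SSH) and using `sudo env` as the direct-exec equivalent of the logged `sudo KUBECONFIG=... kubectl ...` are assumptions made for brevity:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Paths as they appear in the log lines above.
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	args := []string{"env", "KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Printf("%s\n", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}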
	I0815 18:41:34.044605   68248 pod_ready.go:103] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:34.538170   68248 pod_ready.go:93] pod "etcd-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.538199   68248 pod_ready.go:82] duration metric: took 2.506135047s for pod "etcd-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.538212   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543160   68248 pod_ready.go:93] pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.543182   68248 pod_ready.go:82] duration metric: took 4.962289ms for pod "kube-apiserver-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.543195   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547126   68248 pod_ready.go:93] pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:34.547144   68248 pod_ready.go:82] duration metric: took 3.94279ms for pod "kube-controller-manager-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:34.547152   68248 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:36.553459   68248 pod_ready.go:103] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:37.555276   68248 pod_ready.go:93] pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace has status "Ready":"True"
	I0815 18:41:37.555299   68248 pod_ready.go:82] duration metric: took 3.008140869s for pod "kube-scheduler-embed-certs-555028" in "kube-system" namespace to be "Ready" ...
	I0815 18:41:37.555307   68248 pod_ready.go:39] duration metric: took 5.535754922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
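The readiness phase above polls the node and each system-critical pod through the API server. A rough stand-alone approximation, assuming the same checks can be expressed with `kubectl wait` rather than minikube's internal pollers; note it waits on every kube-system pod instead of the specific label set shown in the log, so it is only an illustration (context and node name come from the log):

package main

import (
	"fmt"
	"os/exec"
)

// run executes kubectl with the given arguments and prints its combined output.
func run(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}

func main() {
	run("--context", "embed-certs-555028", "wait", "--for=condition=Ready",
		"node/embed-certs-555028", "--timeout=6m")
	run("--context", "embed-certs-555028", "-n", "kube-system", "wait",
		"--for=condition=Ready", "pod", "--all", "--timeout=6m")
}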
	I0815 18:41:37.555330   68248 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:37.555378   68248 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:41:37.575318   68248 api_server.go:72] duration metric: took 5.815371975s to wait for apiserver process to appear ...
	I0815 18:41:37.575344   68248 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:41:37.575361   68248 api_server.go:253] Checking apiserver healthz at https://192.168.50.234:8443/healthz ...
	I0815 18:41:37.580989   68248 api_server.go:279] https://192.168.50.234:8443/healthz returned 200:
	ok
	I0815 18:41:37.582142   68248 api_server.go:141] control plane version: v1.31.0
	I0815 18:41:37.582164   68248 api_server.go:131] duration metric: took 6.812732ms to wait for apiserver health ...
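The healthz probe logged above is a plain HTTPS GET against the apiserver endpoint that expects a 200 response with body "ok". A minimal sketch of that check; skipping TLS verification is an assumption made for brevity here, since minikube itself talks to the endpoint with the cluster certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint as reported in the log: https://192.168.50.234:8443/healthz
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.234:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}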
	I0815 18:41:37.582174   68248 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:41:37.589334   68248 system_pods.go:59] 9 kube-system pods found
	I0815 18:41:37.589366   68248 system_pods.go:61] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.589377   68248 system_pods.go:61] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.589385   68248 system_pods.go:61] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.589390   68248 system_pods.go:61] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.589397   68248 system_pods.go:61] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.589403   68248 system_pods.go:61] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.589410   68248 system_pods.go:61] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.589422   68248 system_pods.go:61] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.589431   68248 system_pods.go:61] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.589439   68248 system_pods.go:74] duration metric: took 7.257758ms to wait for pod list to return data ...
	I0815 18:41:37.589450   68248 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:41:37.592468   68248 default_sa.go:45] found service account: "default"
	I0815 18:41:37.592500   68248 default_sa.go:55] duration metric: took 3.029278ms for default service account to be created ...
	I0815 18:41:37.592511   68248 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:41:37.597697   68248 system_pods.go:86] 9 kube-system pods found
	I0815 18:41:37.597725   68248 system_pods.go:89] "coredns-6f6b679f8f-mf6q4" [a5f7f959-715b-48a1-9f85-f267614182f1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0815 18:41:37.597730   68248 system_pods.go:89] "coredns-6f6b679f8f-rc947" [3d041322-9d6b-4f46-8f58-e2991f34a297] Running
	I0815 18:41:37.597736   68248 system_pods.go:89] "etcd-embed-certs-555028" [8b533be4-dc0d-4b5e-af13-4efde0ddca33] Running
	I0815 18:41:37.597740   68248 system_pods.go:89] "kube-apiserver-embed-certs-555028" [6cbda2fc-5bf8-42d3-acee-fbf45de39d08] Running
	I0815 18:41:37.597744   68248 system_pods.go:89] "kube-controller-manager-embed-certs-555028" [e1246479-31dd-4437-b32f-4709fa627284] Running
	I0815 18:41:37.597747   68248 system_pods.go:89] "kube-proxy-ktczt" [f5e5b692-edd5-48fd-879b-7b8da4dea9fd] Running
	I0815 18:41:37.597751   68248 system_pods.go:89] "kube-scheduler-embed-certs-555028" [046100d7-8f69-4bff-8d48-c088c27b7601] Running
	I0815 18:41:37.597756   68248 system_pods.go:89] "metrics-server-6867b74b74-zkpx5" [92e18af9-7bd1-4891-b551-06ba4b293560] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:41:37.597763   68248 system_pods.go:89] "storage-provisioner" [d6979830-492e-4ef7-960f-2d4756de1c8f] Running
	I0815 18:41:37.597769   68248 system_pods.go:126] duration metric: took 5.252997ms to wait for k8s-apps to be running ...
	I0815 18:41:37.597779   68248 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:41:37.597819   68248 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:41:37.616004   68248 system_svc.go:56] duration metric: took 18.217091ms WaitForService to wait for kubelet
	I0815 18:41:37.616032   68248 kubeadm.go:582] duration metric: took 5.856091444s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:41:37.616049   68248 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:41:37.619195   68248 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:41:37.619215   68248 node_conditions.go:123] node cpu capacity is 2
	I0815 18:41:37.619223   68248 node_conditions.go:105] duration metric: took 3.169759ms to run NodePressure ...
	I0815 18:41:37.619234   68248 start.go:241] waiting for startup goroutines ...
	I0815 18:41:37.619242   68248 start.go:246] waiting for cluster config update ...
	I0815 18:41:37.619253   68248 start.go:255] writing updated cluster config ...
	I0815 18:41:37.619520   68248 ssh_runner.go:195] Run: rm -f paused
	I0815 18:41:37.669469   68248 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:41:37.671485   68248 out.go:177] * Done! kubectl is now configured to use "embed-certs-555028" cluster and "default" namespace by default
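At this point the embed-certs-555028 context is usable from the host kubeconfig. An illustrative check, not part of the harness, that lists the kube-system pods seen in the readiness output above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Context name comes from the "Done!" line above.
	out, err := exec.Command("kubectl", "--context", "embed-certs-555028",
		"get", "pods", "-n", "kube-system").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}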
	I0815 18:41:34.350702   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:36.849248   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:39.348684   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:41.349379   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:43.848932   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:46.348801   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:48.349736   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:50.848728   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:52.850583   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.184855   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:41:57.185437   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:41:57.185667   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:54.851200   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:57.349542   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:42:02.186077   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:02.186272   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:41:59.349724   67936 pod_ready.go:103] pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace has status "Ready":"False"
	I0815 18:41:59.349748   67936 pod_ready.go:82] duration metric: took 4m0.007281981s for pod "metrics-server-6867b74b74-djv7r" in "kube-system" namespace to be "Ready" ...
	E0815 18:41:59.349757   67936 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 18:41:59.349763   67936 pod_ready.go:39] duration metric: took 4m1.606987494s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 18:41:59.349779   67936 api_server.go:52] waiting for apiserver process to appear ...
	I0815 18:41:59.349802   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:41:59.349844   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:41:59.395509   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:41:59.395541   67936 cri.go:89] found id: ""
	I0815 18:41:59.395552   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:41:59.395608   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.400063   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:41:59.400140   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:41:59.435356   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:41:59.435379   67936 cri.go:89] found id: ""
	I0815 18:41:59.435386   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:41:59.435431   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.440159   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:41:59.440213   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:41:59.479810   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.479841   67936 cri.go:89] found id: ""
	I0815 18:41:59.479851   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:41:59.479907   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.484341   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:41:59.484394   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:41:59.521077   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.521104   67936 cri.go:89] found id: ""
	I0815 18:41:59.521114   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:41:59.521168   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.525075   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:41:59.525131   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:41:59.564058   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:41:59.564084   67936 cri.go:89] found id: ""
	I0815 18:41:59.564093   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:41:59.564150   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.568668   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:41:59.568734   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:41:59.604385   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.604406   67936 cri.go:89] found id: ""
	I0815 18:41:59.604416   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:41:59.604473   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.609023   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:41:59.609095   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:41:59.646289   67936 cri.go:89] found id: ""
	I0815 18:41:59.646334   67936 logs.go:276] 0 containers: []
	W0815 18:41:59.646346   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:41:59.646355   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:41:59.646422   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:41:59.681861   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.681889   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:41:59.681895   67936 cri.go:89] found id: ""
	I0815 18:41:59.681903   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:41:59.681951   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.686379   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:41:59.690328   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:41:59.690353   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:41:59.759302   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:41:59.759338   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:41:59.798249   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:41:59.798276   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:41:59.834097   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:41:59.834129   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:41:59.885365   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:41:59.885398   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:41:59.923013   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:41:59.923038   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:41:59.938162   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:41:59.938192   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:00.077340   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:00.077377   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:00.122292   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:00.122323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:00.165209   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:00.165235   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:00.201278   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:00.201317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:00.238007   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:00.238042   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:00.709997   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:00.710043   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
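The diagnostics pass above follows a fixed pattern: resolve each control-plane component to a container ID with `crictl ps -a --quiet --name=<component>`, then tail its logs with `crictl logs --tail 400 <id>`. A condensed local sketch of that loop, using the same crictl invocations shown in the log; running it directly on the node instead of through ssh_runner is an assumption for brevity:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Components gathered in the pass above (kindnet is skipped since no
	// container was found for it in this run).
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, name := range components {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Println(name, "lookup failed:", err)
			continue
		}
		for _, id := range strings.Fields(string(ids)) {
			fmt.Printf("=== %s [%s] ===\n", name, id)
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("%s\n", logs)
		}
	}
}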
	I0815 18:42:03.252351   67936 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:42:03.268074   67936 api_server.go:72] duration metric: took 4m12.770065297s to wait for apiserver process to appear ...
	I0815 18:42:03.268104   67936 api_server.go:88] waiting for apiserver healthz status ...
	I0815 18:42:03.268159   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:03.268227   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:03.305890   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:03.305913   67936 cri.go:89] found id: ""
	I0815 18:42:03.305923   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:03.305981   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.309958   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:03.310019   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:03.344602   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:03.344630   67936 cri.go:89] found id: ""
	I0815 18:42:03.344639   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:03.344696   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.349261   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:03.349317   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:03.383892   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:03.383912   67936 cri.go:89] found id: ""
	I0815 18:42:03.383919   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:03.383968   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.388158   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:03.388219   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:03.423264   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.423293   67936 cri.go:89] found id: ""
	I0815 18:42:03.423303   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:03.423352   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.427436   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:03.427496   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:03.470792   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.470819   67936 cri.go:89] found id: ""
	I0815 18:42:03.470829   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:03.470890   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.475884   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:03.475956   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:03.513081   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.513103   67936 cri.go:89] found id: ""
	I0815 18:42:03.513110   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:03.513161   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.517913   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:03.517985   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:03.556149   67936 cri.go:89] found id: ""
	I0815 18:42:03.556180   67936 logs.go:276] 0 containers: []
	W0815 18:42:03.556191   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:03.556199   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:03.556257   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:03.595987   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:03.596015   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:03.596021   67936 cri.go:89] found id: ""
	I0815 18:42:03.596030   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:03.596112   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.600430   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:03.604422   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:03.604443   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:03.676629   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:03.676665   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:03.717487   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:03.717514   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:03.755606   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:03.755632   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:03.815152   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:03.815187   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:03.857853   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:03.857882   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:04.296939   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:04.296983   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:04.312346   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:04.312373   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:04.424132   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:04.424162   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:04.482298   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:04.482326   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:04.526805   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:04.526832   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:04.564842   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:04.564871   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:04.602297   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:04.602323   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.137972   67936 api_server.go:253] Checking apiserver healthz at https://192.168.72.14:8443/healthz ...
	I0815 18:42:07.143165   67936 api_server.go:279] https://192.168.72.14:8443/healthz returned 200:
	ok
	I0815 18:42:07.144155   67936 api_server.go:141] control plane version: v1.31.0
	I0815 18:42:07.144174   67936 api_server.go:131] duration metric: took 3.876063215s to wait for apiserver health ...
	I0815 18:42:07.144182   67936 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 18:42:07.144201   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:42:07.144243   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:42:07.185685   67936 cri.go:89] found id: "831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:07.185709   67936 cri.go:89] found id: ""
	I0815 18:42:07.185717   67936 logs.go:276] 1 containers: [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f]
	I0815 18:42:07.185782   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.190086   67936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:42:07.190179   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:42:07.233020   67936 cri.go:89] found id: "f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:07.233044   67936 cri.go:89] found id: ""
	I0815 18:42:07.233053   67936 logs.go:276] 1 containers: [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de]
	I0815 18:42:07.233114   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.237639   67936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:42:07.237698   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:42:07.277613   67936 cri.go:89] found id: "ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:07.277642   67936 cri.go:89] found id: ""
	I0815 18:42:07.277652   67936 logs.go:276] 1 containers: [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c]
	I0815 18:42:07.277714   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.282273   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:42:07.282346   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:42:07.324972   67936 cri.go:89] found id: "74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.325003   67936 cri.go:89] found id: ""
	I0815 18:42:07.325013   67936 logs.go:276] 1 containers: [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27]
	I0815 18:42:07.325071   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.329402   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:42:07.329470   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:42:07.369812   67936 cri.go:89] found id: "66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.369840   67936 cri.go:89] found id: ""
	I0815 18:42:07.369849   67936 logs.go:276] 1 containers: [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791]
	I0815 18:42:07.369902   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.373993   67936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:42:07.374055   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:42:07.412036   67936 cri.go:89] found id: "c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.412062   67936 cri.go:89] found id: ""
	I0815 18:42:07.412072   67936 logs.go:276] 1 containers: [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f]
	I0815 18:42:07.412145   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.416191   67936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:42:07.416263   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:42:07.457677   67936 cri.go:89] found id: ""
	I0815 18:42:07.457710   67936 logs.go:276] 0 containers: []
	W0815 18:42:07.457721   67936 logs.go:278] No container was found matching "kindnet"
	I0815 18:42:07.457728   67936 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0815 18:42:07.457792   67936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 18:42:07.498173   67936 cri.go:89] found id: "000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:07.498199   67936 cri.go:89] found id: "1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.498204   67936 cri.go:89] found id: ""
	I0815 18:42:07.498210   67936 logs.go:276] 2 containers: [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420]
	I0815 18:42:07.498268   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.502704   67936 ssh_runner.go:195] Run: which crictl
	I0815 18:42:07.506501   67936 logs.go:123] Gathering logs for kube-scheduler [74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27] ...
	I0815 18:42:07.506520   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74f2072bea476946df3412347a899b65a745e04763ccf428abf80443a4ea2e27"
	I0815 18:42:07.542685   67936 logs.go:123] Gathering logs for kube-proxy [66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791] ...
	I0815 18:42:07.542713   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66df56dcd33cfa09eb7660bbef9bf2514424a2f4616e419791c093791cd24791"
	I0815 18:42:07.584070   67936 logs.go:123] Gathering logs for kube-controller-manager [c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f] ...
	I0815 18:42:07.584097   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4afb41627fd6f9da0beacf54ccfe116443e582763d723b25549969ef1491c1f"
	I0815 18:42:07.634780   67936 logs.go:123] Gathering logs for storage-provisioner [1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420] ...
	I0815 18:42:07.634812   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a53d726afaa5048a6cb201b8e36b79c6c10e313226b8b159161ef82a7f53420"
	I0815 18:42:07.669410   67936 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:42:07.669436   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:42:08.062406   67936 logs.go:123] Gathering logs for dmesg ...
	I0815 18:42:08.062454   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 18:42:08.077171   67936 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:42:08.077209   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 18:42:08.186125   67936 logs.go:123] Gathering logs for etcd [f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de] ...
	I0815 18:42:08.186158   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f93d6e3cca40cbce86a007cc87affdbdc6491bf132aa5cf7a24b1984341588de"
	I0815 18:42:08.229621   67936 logs.go:123] Gathering logs for storage-provisioner [000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75] ...
	I0815 18:42:08.229655   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 000b1f65df4e55bafea8bd11560cfe8ba6a290f9dc4bae68b74cab548f313c75"
	I0815 18:42:08.266791   67936 logs.go:123] Gathering logs for container status ...
	I0815 18:42:08.266818   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:42:08.314172   67936 logs.go:123] Gathering logs for kubelet ...
	I0815 18:42:08.314197   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:42:08.388793   67936 logs.go:123] Gathering logs for kube-apiserver [831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f] ...
	I0815 18:42:08.388837   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 831a14c2b0bb2aa7c1d82309a34458e6cbbfacd0c1b5d44f89a89af04368061f"
	I0815 18:42:08.438287   67936 logs.go:123] Gathering logs for coredns [ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c] ...
	I0815 18:42:08.438317   67936 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba61cbc99841c047e9ff525e94c043230977ca5bef0ea14771be99fd79af3b6c"
	I0815 18:42:10.990845   67936 system_pods.go:59] 8 kube-system pods found
	I0815 18:42:10.990875   67936 system_pods.go:61] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.990879   67936 system_pods.go:61] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.990883   67936 system_pods.go:61] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.990887   67936 system_pods.go:61] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.990890   67936 system_pods.go:61] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.990894   67936 system_pods.go:61] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.990900   67936 system_pods.go:61] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.990905   67936 system_pods.go:61] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.990913   67936 system_pods.go:74] duration metric: took 3.846725869s to wait for pod list to return data ...
	I0815 18:42:10.990919   67936 default_sa.go:34] waiting for default service account to be created ...
	I0815 18:42:10.993933   67936 default_sa.go:45] found service account: "default"
	I0815 18:42:10.993958   67936 default_sa.go:55] duration metric: took 3.032805ms for default service account to be created ...
	I0815 18:42:10.993968   67936 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 18:42:10.998531   67936 system_pods.go:86] 8 kube-system pods found
	I0815 18:42:10.998553   67936 system_pods.go:89] "coredns-6f6b679f8f-kpq9m" [9592b56d-a037-4212-86f2-29e5824626fc] Running
	I0815 18:42:10.998558   67936 system_pods.go:89] "etcd-no-preload-599042" [74c43f11-eaa7-49fa-b233-02cf999e6ca3] Running
	I0815 18:42:10.998562   67936 system_pods.go:89] "kube-apiserver-no-preload-599042" [2693c62c-f0c8-4afe-9674-2f85250d4b79] Running
	I0815 18:42:10.998567   67936 system_pods.go:89] "kube-controller-manager-no-preload-599042" [17ef4b83-1265-4fd2-ac41-731a2b9a994d] Running
	I0815 18:42:10.998570   67936 system_pods.go:89] "kube-proxy-bwb9h" [5f286e9d-3035-4280-adff-d3ca5653c2f8] Running
	I0815 18:42:10.998575   67936 system_pods.go:89] "kube-scheduler-no-preload-599042" [42bda204-93c9-41cf-95b4-7b95c200c592] Running
	I0815 18:42:10.998582   67936 system_pods.go:89] "metrics-server-6867b74b74-djv7r" [3d03d5bc-31ed-4a75-8d75-627d40a2d8fc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 18:42:10.998586   67936 system_pods.go:89] "storage-provisioner" [593f1bd8-17e0-471e-849c-d62d6ed5b14e] Running
	I0815 18:42:10.998592   67936 system_pods.go:126] duration metric: took 4.619003ms to wait for k8s-apps to be running ...
	I0815 18:42:10.998598   67936 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 18:42:10.998638   67936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:42:11.015236   67936 system_svc.go:56] duration metric: took 16.627802ms WaitForService to wait for kubelet
	I0815 18:42:11.015260   67936 kubeadm.go:582] duration metric: took 4m20.517256799s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 18:42:11.015280   67936 node_conditions.go:102] verifying NodePressure condition ...
	I0815 18:42:11.018544   67936 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0815 18:42:11.018570   67936 node_conditions.go:123] node cpu capacity is 2
	I0815 18:42:11.018584   67936 node_conditions.go:105] duration metric: took 3.298753ms to run NodePressure ...
	I0815 18:42:11.018598   67936 start.go:241] waiting for startup goroutines ...
	I0815 18:42:11.018611   67936 start.go:246] waiting for cluster config update ...
	I0815 18:42:11.018626   67936 start.go:255] writing updated cluster config ...
	I0815 18:42:11.018907   67936 ssh_runner.go:195] Run: rm -f paused
	I0815 18:42:11.065371   67936 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 18:42:11.067513   67936 out.go:177] * Done! kubectl is now configured to use "no-preload-599042" cluster and "default" namespace by default
	I0815 18:42:12.186839   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:12.187041   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:42:32.187938   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:42:32.188123   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.189799   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:43:12.190012   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:43:12.190023   68713 kubeadm.go:310] 
	I0815 18:43:12.190075   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:43:12.190133   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:43:12.190148   68713 kubeadm.go:310] 
	I0815 18:43:12.190205   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:43:12.190265   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:43:12.190394   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:43:12.190403   68713 kubeadm.go:310] 
	I0815 18:43:12.190523   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:43:12.190571   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:43:12.190627   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:43:12.190636   68713 kubeadm.go:310] 
	I0815 18:43:12.190772   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:43:12.190928   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:43:12.190950   68713 kubeadm.go:310] 
	I0815 18:43:12.191108   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:43:12.191218   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:43:12.191344   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:43:12.191478   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:43:12.191504   68713 kubeadm.go:310] 
	I0815 18:43:12.192283   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:43:12.192421   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:43:12.192523   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
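Before the retry, the troubleshooting commands kubeadm prints above are the quickest way to see why the kubelet never answered on port 10248. An illustrative sketch that simply runs those exact commands on the node and prints their output:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The three commands suggested in the kubeadm error output above.
	cmds := [][]string{
		{"systemctl", "status", "kubelet"},
		{"journalctl", "-xeu", "kubelet"},
		{"/bin/bash", "-c", "crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"},
	}
	for _, c := range cmds {
		fmt.Println("$ " + strings.Join(c, " "))
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("%s\n", out)
		if err != nil {
			fmt.Println("command exited with error:", err)
		}
	}
}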
	W0815 18:43:12.192655   68713 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0815 18:43:12.192699   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0815 18:43:12.658571   68713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:43:12.675797   68713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 18:43:12.687340   68713 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 18:43:12.687370   68713 kubeadm.go:157] found existing configuration files:
	
	I0815 18:43:12.687422   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 18:43:12.698401   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 18:43:12.698464   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 18:43:12.709632   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 18:43:12.720330   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 18:43:12.720386   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 18:43:12.731593   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.742122   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 18:43:12.742185   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 18:43:12.753042   68713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 18:43:12.762799   68713 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 18:43:12.762855   68713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 18:43:12.772788   68713 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0815 18:43:12.987927   68713 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 18:45:08.956975   68713 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0815 18:45:08.957069   68713 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0815 18:45:08.958834   68713 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0815 18:45:08.958904   68713 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 18:45:08.958993   68713 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 18:45:08.959133   68713 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 18:45:08.959280   68713 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0815 18:45:08.959376   68713 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 18:45:08.961205   68713 out.go:235]   - Generating certificates and keys ...
	I0815 18:45:08.961294   68713 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 18:45:08.961352   68713 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 18:45:08.961424   68713 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0815 18:45:08.961475   68713 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0815 18:45:08.961536   68713 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0815 18:45:08.961581   68713 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0815 18:45:08.961637   68713 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0815 18:45:08.961689   68713 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0815 18:45:08.961795   68713 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0815 18:45:08.961910   68713 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0815 18:45:08.961971   68713 kubeadm.go:310] [certs] Using the existing "sa" key
	I0815 18:45:08.962030   68713 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 18:45:08.962078   68713 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 18:45:08.962127   68713 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 18:45:08.962214   68713 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 18:45:08.962316   68713 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 18:45:08.962448   68713 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 18:45:08.962565   68713 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 18:45:08.962626   68713 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 18:45:08.962724   68713 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 18:45:08.964403   68713 out.go:235]   - Booting up control plane ...
	I0815 18:45:08.964526   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 18:45:08.964631   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 18:45:08.964736   68713 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 18:45:08.964866   68713 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 18:45:08.965043   68713 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0815 18:45:08.965121   68713 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0815 18:45:08.965225   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965418   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965508   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965703   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965766   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.965919   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.965981   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966140   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966200   68713 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0815 18:45:08.966381   68713 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0815 18:45:08.966389   68713 kubeadm.go:310] 
	I0815 18:45:08.966438   68713 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0815 18:45:08.966473   68713 kubeadm.go:310] 		timed out waiting for the condition
	I0815 18:45:08.966481   68713 kubeadm.go:310] 
	I0815 18:45:08.966533   68713 kubeadm.go:310] 	This error is likely caused by:
	I0815 18:45:08.966580   68713 kubeadm.go:310] 		- The kubelet is not running
	I0815 18:45:08.966711   68713 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0815 18:45:08.966718   68713 kubeadm.go:310] 
	I0815 18:45:08.966844   68713 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0815 18:45:08.966900   68713 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0815 18:45:08.966948   68713 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0815 18:45:08.966958   68713 kubeadm.go:310] 
	I0815 18:45:08.967082   68713 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0815 18:45:08.967201   68713 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0815 18:45:08.967214   68713 kubeadm.go:310] 
	I0815 18:45:08.967341   68713 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0815 18:45:08.967450   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0815 18:45:08.967548   68713 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0815 18:45:08.967646   68713 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0815 18:45:08.967678   68713 kubeadm.go:310] 
	I0815 18:45:08.967716   68713 kubeadm.go:394] duration metric: took 7m56.388213745s to StartCluster
	I0815 18:45:08.967768   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0815 18:45:08.967834   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 18:45:09.013913   68713 cri.go:89] found id: ""
	I0815 18:45:09.013943   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.013954   68713 logs.go:278] No container was found matching "kube-apiserver"
	I0815 18:45:09.013961   68713 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0815 18:45:09.014030   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 18:45:09.051370   68713 cri.go:89] found id: ""
	I0815 18:45:09.051395   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.051403   68713 logs.go:278] No container was found matching "etcd"
	I0815 18:45:09.051409   68713 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0815 18:45:09.051477   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 18:45:09.086615   68713 cri.go:89] found id: ""
	I0815 18:45:09.086646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.086653   68713 logs.go:278] No container was found matching "coredns"
	I0815 18:45:09.086659   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0815 18:45:09.086708   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 18:45:09.122335   68713 cri.go:89] found id: ""
	I0815 18:45:09.122370   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.122381   68713 logs.go:278] No container was found matching "kube-scheduler"
	I0815 18:45:09.122389   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0815 18:45:09.122453   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 18:45:09.163207   68713 cri.go:89] found id: ""
	I0815 18:45:09.163232   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.163241   68713 logs.go:278] No container was found matching "kube-proxy"
	I0815 18:45:09.163247   68713 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 18:45:09.163308   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 18:45:09.199396   68713 cri.go:89] found id: ""
	I0815 18:45:09.199426   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.199437   68713 logs.go:278] No container was found matching "kube-controller-manager"
	I0815 18:45:09.199444   68713 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0815 18:45:09.199504   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 18:45:09.235073   68713 cri.go:89] found id: ""
	I0815 18:45:09.235101   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.235112   68713 logs.go:278] No container was found matching "kindnet"
	I0815 18:45:09.235120   68713 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 18:45:09.235180   68713 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 18:45:09.271614   68713 cri.go:89] found id: ""
	I0815 18:45:09.271646   68713 logs.go:276] 0 containers: []
	W0815 18:45:09.271659   68713 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0815 18:45:09.271671   68713 logs.go:123] Gathering logs for describe nodes ...
	I0815 18:45:09.271686   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0815 18:45:09.372192   68713 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0815 18:45:09.372214   68713 logs.go:123] Gathering logs for CRI-O ...
	I0815 18:45:09.372231   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0815 18:45:09.496743   68713 logs.go:123] Gathering logs for container status ...
	I0815 18:45:09.496780   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 18:45:09.540434   68713 logs.go:123] Gathering logs for kubelet ...
	I0815 18:45:09.540471   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0815 18:45:09.595546   68713 logs.go:123] Gathering logs for dmesg ...
	I0815 18:45:09.595584   68713 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0815 18:45:09.609831   68713 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0815 18:45:09.609885   68713 out.go:270] * 
	W0815 18:45:09.609942   68713 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.609956   68713 out.go:270] * 
	W0815 18:45:09.610794   68713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 18:45:09.614213   68713 out.go:201] 
	W0815 18:45:09.615379   68713 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0815 18:45:09.615420   68713 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0815 18:45:09.615437   68713 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0815 18:45:09.616840   68713 out.go:201] 
	
	
	==> CRI-O <==
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.451962308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748155451936408,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=78549ea2-e0e4-4e54-8358-2258f7536733 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.452674665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e1ea103b-5852-4616-a011-9be025fe12c4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.452727430Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e1ea103b-5852-4616-a011-9be025fe12c4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.452757821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e1ea103b-5852-4616-a011-9be025fe12c4 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.481982901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c7e2e76-cf0d-44d1-89bd-96cb1152d5a8 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.482122903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c7e2e76-cf0d-44d1-89bd-96cb1152d5a8 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.483644468Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6fca5538-3365-4f8b-8b73-5d923512e443 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.484032674Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748155484009830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6fca5538-3365-4f8b-8b73-5d923512e443 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.484618243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c52d1c7b-825a-42c3-ac2e-ae9be30ed825 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.484687879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c52d1c7b-825a-42c3-ac2e-ae9be30ed825 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.484725469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c52d1c7b-825a-42c3-ac2e-ae9be30ed825 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.517052252Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=949a32e0-1291-4284-84de-16c197c18c87 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.517182138Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=949a32e0-1291-4284-84de-16c197c18c87 name=/runtime.v1.RuntimeService/Version
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.518695044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5638ffa9-ced4-4c52-9ed1-2b992b49683e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.519084185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748155519064821,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5638ffa9-ced4-4c52-9ed1-2b992b49683e name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.519635324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3150352-875e-43ea-b17a-aa1b9dca161c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.519706229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3150352-875e-43ea-b17a-aa1b9dca161c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.519742382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d3150352-875e-43ea-b17a-aa1b9dca161c name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.553853425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79f77442-d65e-4a36-b82d-0274200e16ee name=/runtime.v1.RuntimeService/Version
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.553954418Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79f77442-d65e-4a36-b82d-0274200e16ee name=/runtime.v1.RuntimeService/Version
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.555029016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c943349-6419-4911-87db-f0bdc3bc9d62 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.555415369Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1723748155555390236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c943349-6419-4911-87db-f0bdc3bc9d62 name=/runtime.v1.ImageService/ImageFsInfo
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.556140368Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c0b7182a-5f5e-4f21-b463-7875bd9cf5b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.556216359Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c0b7182a-5f5e-4f21-b463-7875bd9cf5b8 name=/runtime.v1.RuntimeService/ListContainers
	Aug 15 18:55:55 old-k8s-version-278865 crio[649]: time="2024-08-15 18:55:55.556254602Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c0b7182a-5f5e-4f21-b463-7875bd9cf5b8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Aug15 18:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.055068] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040001] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.968285] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.579604] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.625301] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000012] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug15 18:37] systemd-fstab-generator[572]: Ignoring "noauto" option for root device
	[  +0.058621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064012] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.191090] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.131642] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.264819] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +6.501610] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +0.065792] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.624202] systemd-fstab-generator[1024]: Ignoring "noauto" option for root device
	[ +13.041505] kauditd_printk_skb: 46 callbacks suppressed
	[Aug15 18:41] systemd-fstab-generator[5085]: Ignoring "noauto" option for root device
	[Aug15 18:43] systemd-fstab-generator[5373]: Ignoring "noauto" option for root device
	[  +0.068065] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 18:55:55 up 19 min,  0 users,  load average: 0.00, 0.03, 0.05
	Linux old-k8s-version-278865 5.10.207 #1 SMP Wed Aug 14 19:18:01 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0009a69d4, 0xc000cc4a00)
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]: goroutine 111 [syscall]:
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]: syscall.Syscall6(0xe8, 0xd, 0xc000ad9b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0xaf, 0xbd, 0x0)
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xd, 0xc000ad9b6c, 0x7, 0x7, 0xffffffffffffffff, 0x1000000016c, 0x0, 0x0)
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000c3f840, 0x10900000000, 0x10000000100, 0x1)
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000c53900)
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6803]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Aug 15 18:55:53 old-k8s-version-278865 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Aug 15 18:55:53 old-k8s-version-278865 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Aug 15 18:55:53 old-k8s-version-278865 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 132.
	Aug 15 18:55:53 old-k8s-version-278865 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Aug 15 18:55:53 old-k8s-version-278865 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6812]: I0815 18:55:53.907382    6812 server.go:416] Version: v1.20.0
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6812]: I0815 18:55:53.907864    6812 server.go:837] Client rotation is on, will bootstrap in background
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6812]: I0815 18:55:53.910065    6812 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6812]: W0815 18:55:53.911422    6812 manager.go:159] Cannot detect current cgroup on cgroup v2
	Aug 15 18:55:53 old-k8s-version-278865 kubelet[6812]: I0815 18:55:53.911471    6812 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 2 (225.398244ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-278865" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (100.55s)
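
The repeated kubelet health-check refusals above, together with the kubelet warning "Cannot detect current cgroup on cgroup v2" and the systemd restart counter reaching 132, indicate the kubelet crash-looping before the control plane could come up. A minimal triage sketch, assuming shell access to the old-k8s-version-278865 node and using only commands already suggested in the failure output (the --extra-config flag is minikube's own printed suggestion for issue #4172, not a verified fix):

    # Check why the kubelet keeps restarting
    systemctl status kubelet
    journalctl -xeu kubelet

    # See whether CRI-O ever started any control-plane containers
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Retry the start with the kubelet forced onto the systemd cgroup driver,
    # per the suggestion in the log (https://github.com/kubernetes/minikube/issues/4172)
    minikube start -p old-k8s-version-278865 --extra-config=kubelet.cgroup-driver=systemd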

                                                
                                    

Test pass (242/312)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 57.84
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 16.9
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 54.45
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 206.63
31 TestAddons/serial/GCPAuth/Namespaces 0.14
33 TestAddons/parallel/Registry 16.78
35 TestAddons/parallel/InspektorGadget 12.05
37 TestAddons/parallel/HelmTiller 12.45
39 TestAddons/parallel/CSI 49.86
40 TestAddons/parallel/Headlamp 70.56
41 TestAddons/parallel/CloudSpanner 6.57
42 TestAddons/parallel/LocalPath 55.3
43 TestAddons/parallel/NvidiaDevicePlugin 5.56
44 TestAddons/parallel/Yakd 12.12
46 TestCertOptions 73.04
47 TestCertExpiration 286.12
49 TestForceSystemdFlag 85.26
50 TestForceSystemdEnv 48.15
52 TestKVMDriverInstallOrUpdate 5.21
56 TestErrorSpam/setup 41.39
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.71
59 TestErrorSpam/pause 1.54
60 TestErrorSpam/unpause 1.7
61 TestErrorSpam/stop 4.77
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.85
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 53.64
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
73 TestFunctional/serial/CacheCmd/cache/add_local 2.24
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 367.49
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 1.11
84 TestFunctional/serial/LogsFileCmd 1.1
85 TestFunctional/serial/InvalidService 3.96
87 TestFunctional/parallel/ConfigCmd 0.3
88 TestFunctional/parallel/DashboardCmd 14.61
89 TestFunctional/parallel/DryRun 0.26
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.79
95 TestFunctional/parallel/ServiceCmdConnect 10.84
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 46.01
99 TestFunctional/parallel/SSHCmd 0.48
100 TestFunctional/parallel/CpCmd 1.21
101 TestFunctional/parallel/MySQL 23.27
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.42
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.46
111 TestFunctional/parallel/License 0.63
112 TestFunctional/parallel/ServiceCmd/DeployApp 11.2
113 TestFunctional/parallel/Version/short 0.04
114 TestFunctional/parallel/Version/components 0.68
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
119 TestFunctional/parallel/ImageCommands/ImageBuild 4.19
120 TestFunctional/parallel/ImageCommands/Setup 2
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.99
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.84
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.06
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.77
129 TestFunctional/parallel/ServiceCmd/List 0.32
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.78
133 TestFunctional/parallel/ServiceCmd/Format 0.37
135 TestFunctional/parallel/ServiceCmd/URL 0.36
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
146 TestFunctional/parallel/ProfileCmd/profile_list 0.31
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.25
148 TestFunctional/parallel/MountCmd/any-port 12.82
149 TestFunctional/parallel/MountCmd/specific-port 1.99
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
151 TestFunctional/delete_echo-server_images 0.04
152 TestFunctional/delete_my-image_image 0.01
153 TestFunctional/delete_minikube_cached_images 0.01
157 TestMultiControlPlane/serial/StartCluster 253.13
158 TestMultiControlPlane/serial/DeployApp 7.47
159 TestMultiControlPlane/serial/PingHostFromPods 1.17
160 TestMultiControlPlane/serial/AddWorkerNode 56.65
161 TestMultiControlPlane/serial/NodeLabels 0.06
162 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.52
163 TestMultiControlPlane/serial/CopyFile 12.26
165 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.46
167 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.37
169 TestMultiControlPlane/serial/DeleteSecondaryNode 16.47
170 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.36
172 TestMultiControlPlane/serial/RestartCluster 355.83
173 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
174 TestMultiControlPlane/serial/AddSecondaryNode 79.03
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
179 TestJSONOutput/start/Command 87.67
180 TestJSONOutput/start/Audit 0
182 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/pause/Command 0.72
186 TestJSONOutput/pause/Audit 0
188 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/unpause/Command 0.62
192 TestJSONOutput/unpause/Audit 0
194 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/stop/Command 7.33
198 TestJSONOutput/stop/Audit 0
200 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
202 TestErrorJSONOutput 0.19
207 TestMainNoArgs 0.04
208 TestMinikubeProfile 92.23
211 TestMountStart/serial/StartWithMountFirst 29.53
212 TestMountStart/serial/VerifyMountFirst 0.36
213 TestMountStart/serial/StartWithMountSecond 27
214 TestMountStart/serial/VerifyMountSecond 0.36
215 TestMountStart/serial/DeleteFirst 0.66
216 TestMountStart/serial/VerifyMountPostDelete 0.36
217 TestMountStart/serial/Stop 1.26
218 TestMountStart/serial/RestartStopped 22.19
219 TestMountStart/serial/VerifyMountPostStop 0.37
222 TestMultiNode/serial/FreshStart2Nodes 113.4
223 TestMultiNode/serial/DeployApp2Nodes 5.92
224 TestMultiNode/serial/PingHostFrom2Pods 0.77
225 TestMultiNode/serial/AddNode 50
226 TestMultiNode/serial/MultiNodeLabels 0.06
227 TestMultiNode/serial/ProfileList 0.21
228 TestMultiNode/serial/CopyFile 6.98
229 TestMultiNode/serial/StopNode 2.29
230 TestMultiNode/serial/StartAfterStop 40.05
232 TestMultiNode/serial/DeleteNode 1.96
234 TestMultiNode/serial/RestartMultiNode 207.91
235 TestMultiNode/serial/ValidateNameConflict 40.39
242 TestScheduledStopUnix 113.82
246 TestRunningBinaryUpgrade 197.45
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
252 TestNoKubernetes/serial/StartWithK8s 90.77
261 TestPause/serial/Start 133.68
262 TestNoKubernetes/serial/StartWithStopK8s 39.66
263 TestNoKubernetes/serial/Start 28.6
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
265 TestNoKubernetes/serial/ProfileList 15.6
266 TestNoKubernetes/serial/Stop 1.33
267 TestNoKubernetes/serial/StartNoArgs 24.4
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
277 TestNetworkPlugins/group/false 2.92
281 TestStoppedBinaryUpgrade/Setup 2.57
282 TestStoppedBinaryUpgrade/Upgrade 117.91
285 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
287 TestStartStop/group/no-preload/serial/FirstStart 73.67
289 TestStartStop/group/embed-certs/serial/FirstStart 108.86
290 TestStartStop/group/no-preload/serial/DeployApp 11.34
292 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.7
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.96
295 TestStartStop/group/embed-certs/serial/DeployApp 9.31
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.96
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.99
304 TestStartStop/group/no-preload/serial/SecondStart 642.37
306 TestStartStop/group/embed-certs/serial/SecondStart 573.14
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 539.56
309 TestStartStop/group/old-k8s-version/serial/Stop 6.29
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
321 TestStartStop/group/newest-cni/serial/FirstStart 47.83
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.13
324 TestStartStop/group/newest-cni/serial/Stop 7.34
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/newest-cni/serial/SecondStart 36.7
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
330 TestStartStop/group/newest-cni/serial/Pause 2.61
331 TestNetworkPlugins/group/auto/Start 86.37
332 TestNetworkPlugins/group/kindnet/Start 90.26
333 TestNetworkPlugins/group/calico/Start 129.33
334 TestNetworkPlugins/group/custom-flannel/Start 76.33
335 TestNetworkPlugins/group/auto/KubeletFlags 0.2
336 TestNetworkPlugins/group/auto/NetCatPod 11.26
337 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
338 TestNetworkPlugins/group/kindnet/KubeletFlags 0.2
339 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
340 TestNetworkPlugins/group/auto/DNS 0.21
341 TestNetworkPlugins/group/auto/Localhost 0.17
342 TestNetworkPlugins/group/auto/HairPin 0.15
343 TestNetworkPlugins/group/kindnet/DNS 0.31
344 TestNetworkPlugins/group/kindnet/Localhost 0.16
345 TestNetworkPlugins/group/kindnet/HairPin 0.17
346 TestNetworkPlugins/group/enable-default-cni/Start 83.73
347 TestNetworkPlugins/group/flannel/Start 94.03
348 TestNetworkPlugins/group/calico/ControllerPod 6.01
349 TestNetworkPlugins/group/calico/KubeletFlags 0.21
350 TestNetworkPlugins/group/calico/NetCatPod 11.24
351 TestNetworkPlugins/group/calico/DNS 0.25
352 TestNetworkPlugins/group/calico/Localhost 0.16
353 TestNetworkPlugins/group/calico/HairPin 0.19
354 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
355 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
356 TestNetworkPlugins/group/custom-flannel/DNS 0.18
357 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
358 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
359 TestNetworkPlugins/group/bridge/Start 60.41
360 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
361 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
362 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
363 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
364 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
365 TestNetworkPlugins/group/flannel/ControllerPod 6.01
366 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
367 TestNetworkPlugins/group/flannel/NetCatPod 11.24
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
369 TestNetworkPlugins/group/bridge/NetCatPod 11.24
370 TestNetworkPlugins/group/flannel/DNS 0.16
371 TestNetworkPlugins/group/flannel/Localhost 0.12
372 TestNetworkPlugins/group/flannel/HairPin 0.12
373 TestNetworkPlugins/group/bridge/DNS 0.14
374 TestNetworkPlugins/group/bridge/Localhost 0.12
375 TestNetworkPlugins/group/bridge/HairPin 0.12
x
+
TestDownloadOnly/v1.20.0/json-events (57.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-709194 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-709194 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (57.840946173s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (57.84s)
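(For reference, the download-only flow exercised above can be reproduced against a locally built minikube binary with roughly the same invocation the harness uses; this is a sketch only, the profile name is arbitrary, and the duplicated --container-runtime flag is simply how the harness composes its arguments.)

	# download the v1.20.0 ISO, preload tarball and kubectl without creating a VM
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-709194 \
	  --force --alsologtostderr --kubernetes-version=v1.20.0 \
	  --container-runtime=crio --driver=kvm2
	# clean up the download-only profile afterwards
	out/minikube-linux-amd64 delete -p download-only-709194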

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-709194
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-709194: exit status 85 (53.703948ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-709194 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |          |
	|         | -p download-only-709194        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:05:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:05:09.091808   20231 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:09.092068   20231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:09.092077   20231 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:09.092082   20231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:09.092292   20231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	W0815 17:05:09.092413   20231 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19450-13013/.minikube/config/config.json: open /home/jenkins/minikube-integration/19450-13013/.minikube/config/config.json: no such file or directory
	I0815 17:05:09.092964   20231 out.go:352] Setting JSON to true
	I0815 17:05:09.093870   20231 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2855,"bootTime":1723738654,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:05:09.093929   20231 start.go:139] virtualization: kvm guest
	I0815 17:05:09.096225   20231 out.go:97] [download-only-709194] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0815 17:05:09.096318   20231 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 17:05:09.096353   20231 notify.go:220] Checking for updates...
	I0815 17:05:09.097625   20231 out.go:169] MINIKUBE_LOCATION=19450
	I0815 17:05:09.098854   20231 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:09.100084   20231 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:05:09.101351   20231 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:05:09.102607   20231 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 17:05:09.105262   20231 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 17:05:09.105481   20231 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:09.202026   20231 out.go:97] Using the kvm2 driver based on user configuration
	I0815 17:05:09.202061   20231 start.go:297] selected driver: kvm2
	I0815 17:05:09.202072   20231 start.go:901] validating driver "kvm2" against <nil>
	I0815 17:05:09.202401   20231 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:09.202529   20231 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 17:05:09.217437   20231 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 17:05:09.217510   20231 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:09.218011   20231 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0815 17:05:09.218272   20231 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 17:05:09.218330   20231 cni.go:84] Creating CNI manager for ""
	I0815 17:05:09.218343   20231 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 17:05:09.218350   20231 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:09.218405   20231 start.go:340] cluster config:
	{Name:download-only-709194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-709194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:09.218582   20231 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:09.220570   20231 out.go:97] Downloading VM boot image ...
	I0815 17:05:09.220607   20231 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/iso/amd64/minikube-v1.33.1-1723650137-19443-amd64.iso
	I0815 17:05:23.931065   20231 out.go:97] Starting "download-only-709194" primary control-plane node in "download-only-709194" cluster
	I0815 17:05:23.931092   20231 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 17:05:24.042726   20231 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:05:24.042755   20231 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:24.042891   20231 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 17:05:24.044572   20231 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 17:05:24.044594   20231 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:05:24.164435   20231 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:05:38.085102   20231 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:05:38.085200   20231 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:05:38.984617   20231 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0815 17:05:38.984942   20231 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/download-only-709194/config.json ...
	I0815 17:05:38.984969   20231 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/download-only-709194/config.json: {Name:mk18f81329f8d530f941f0956b52ab355bf11deb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:38.985119   20231 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0815 17:05:38.985282   20231 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-709194 host does not exist
	  To start a cluster, run: "minikube start -p download-only-709194"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-709194
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (16.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-379390 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-379390 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (16.895644081s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (16.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-379390
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-379390: exit status 85 (56.908185ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-709194 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | -p download-only-709194        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:06 UTC |
	| delete  | -p download-only-709194        | download-only-709194 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC | 15 Aug 24 17:06 UTC |
	| start   | -o=json --download-only        | download-only-379390 | jenkins | v1.33.1 | 15 Aug 24 17:06 UTC |                     |
	|         | -p download-only-379390        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:06:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:06:07.234088   20603 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:06:07.234207   20603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:06:07.234216   20603 out.go:358] Setting ErrFile to fd 2...
	I0815 17:06:07.234220   20603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:06:07.234385   20603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:06:07.234904   20603 out.go:352] Setting JSON to true
	I0815 17:06:07.235749   20603 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2913,"bootTime":1723738654,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:06:07.235808   20603 start.go:139] virtualization: kvm guest
	I0815 17:06:07.237867   20603 out.go:97] [download-only-379390] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:06:07.238045   20603 notify.go:220] Checking for updates...
	I0815 17:06:07.239515   20603 out.go:169] MINIKUBE_LOCATION=19450
	I0815 17:06:07.241053   20603 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:06:07.242506   20603 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:06:07.243782   20603 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:06:07.245163   20603 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0815 17:06:07.247573   20603 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 17:06:07.247773   20603 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:06:07.278860   20603 out.go:97] Using the kvm2 driver based on user configuration
	I0815 17:06:07.278886   20603 start.go:297] selected driver: kvm2
	I0815 17:06:07.278898   20603 start.go:901] validating driver "kvm2" against <nil>
	I0815 17:06:07.279212   20603 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:06:07.279300   20603 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19450-13013/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0815 17:06:07.293532   20603 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0815 17:06:07.293592   20603 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:06:07.294047   20603 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0815 17:06:07.294211   20603 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 17:06:07.294271   20603 cni.go:84] Creating CNI manager for ""
	I0815 17:06:07.294283   20603 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0815 17:06:07.294290   20603 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0815 17:06:07.294350   20603 start.go:340] cluster config:
	{Name:download-only-379390 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-379390 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:06:07.294445   20603 iso.go:125] acquiring lock: {Name:mk7679adb3d429c01d170a7f2d45922a687c8479 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:06:07.296294   20603 out.go:97] Starting "download-only-379390" primary control-plane node in "download-only-379390" cluster
	I0815 17:06:07.296318   20603 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:06:07.470710   20603 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	I0815 17:06:07.470743   20603 cache.go:56] Caching tarball of preloaded images
	I0815 17:06:07.470901   20603 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime crio
	I0815 17:06:07.472864   20603 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 17:06:07.472884   20603 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4 ...
	I0815 17:06:07.582091   20603 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:4a2ae163f7665ceaa95dee8ffc8efdba -> /home/jenkins/minikube-integration/19450-13013/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-379390 host does not exist
	  To start a cluster, run: "minikube start -p download-only-379390"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-379390
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-174247 --alsologtostderr --binary-mirror http://127.0.0.1:41239 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-174247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-174247
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestOffline (54.45s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-681307 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-681307 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (53.258345281s)
helpers_test.go:175: Cleaning up "offline-crio-681307" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-681307
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-681307: (1.190171092s)
--- PASS: TestOffline (54.45s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-973562
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-973562: exit status 85 (50.062355ms)

                                                
                                                
-- stdout --
	* Profile "addons-973562" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-973562"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-973562
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-973562: exit status 85 (51.213266ms)

                                                
                                                
-- stdout --
	* Profile "addons-973562" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-973562"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (206.63s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-973562 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-973562 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m26.626068072s)
--- PASS: TestAddons/Setup (206.63s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-973562 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-973562 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.830321ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-svjjj" [c96c1884-ddbb-4955-b9b8-6c11e6a0e893] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004807031s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mjdz8" [e4645394-eb8e-49e3-bab8-fb41e2aaebdf] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004298087s
addons_test.go:342: (dbg) Run:  kubectl --context addons-973562 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-973562 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-973562 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.869018921s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 ip
2024/08/15 17:10:27 [DEBUG] GET http://192.168.39.200:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.78s)
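(The registry check above reduces to an in-cluster HTTP probe plus a node-IP lookup; a minimal sketch of the same verification, assuming the addons-973562 context from this run, is shown below.)

	# probe the registry service from inside the cluster
	kubectl --context addons-973562 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# node IP; the registry proxy answers on port 5000 (192.168.39.200:5000 in this run)
	out/minikube-linux-amd64 -p addons-973562 ip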

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (12.05s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-w8hf7" [204d35ea-fbe7-4d4d-b1db-e6c3a228aa7f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005097101s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-973562
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-973562: (6.048405391s)
--- PASS: TestAddons/parallel/InspektorGadget (12.05s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (12.45s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.504983ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-4z6lg" [e1606621-5c24-447f-bc36-4b807d48e67a] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.008915247s
addons_test.go:475: (dbg) Run:  kubectl --context addons-973562 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-973562 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.835441713s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.45s)

                                                
                                    
x
+
TestAddons/parallel/CSI (49.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.556896ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [77d36069-ad7c-49d7-a54b-cfe098fdb78d] Pending
helpers_test.go:344: "task-pv-pod" [77d36069-ad7c-49d7-a54b-cfe098fdb78d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [77d36069-ad7c-49d7-a54b-cfe098fdb78d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.011886896s
addons_test.go:590: (dbg) Run:  kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-973562 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-973562 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-973562 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-973562 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [4e85efbf-f757-46ac-8734-7ce312bd7f3e] Pending
helpers_test.go:344: "task-pv-pod-restore" [4e85efbf-f757-46ac-8734-7ce312bd7f3e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [4e85efbf-f757-46ac-8734-7ce312bd7f3e] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004597371s
addons_test.go:632: (dbg) Run:  kubectl --context addons-973562 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-973562 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-973562 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-973562 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.894435511s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-amd64 -p addons-973562 addons disable volumesnapshots --alsologtostderr -v=1: (1.177902171s)
--- PASS: TestAddons/parallel/CSI (49.86s)
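(The CSI exercise above follows a provision / snapshot / restore cycle; a condensed sketch of the same sequence, using the testdata manifests the test applies against the addons-973562 context, looks like the following.)

	# provision a PVC and a pod that mounts it
	kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	# snapshot the volume, then remove the original pod and claim
	kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-973562 delete pod task-pv-pod
	kubectl --context addons-973562 delete pvc hpvc
	# restore a new claim and pod from the snapshot
	kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-973562 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml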

                                                
                                    
x
+
TestAddons/parallel/Headlamp (70.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-973562 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-973562 --alsologtostderr -v=1: (1.314099901s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-lt6rm" [7838ea9e-895e-43bc-8be4-9f0d98616812] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-lt6rm" [7838ea9e-895e-43bc-8be4-9f0d98616812] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-lt6rm" [7838ea9e-895e-43bc-8be4-9f0d98616812] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 1m9.003864842s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (70.56s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-hl5rh" [bf483d88-0db4-4987-b1fd-26c7162cdccb] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004705252s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-973562
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                    
TestAddons/parallel/LocalPath (55.3s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-973562 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-973562 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-973562 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [11a24f23-bf40-4462-9b4a-828eecff4e86] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [11a24f23-bf40-4462-9b4a-828eecff4e86] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [11a24f23-bf40-4462-9b4a-828eecff4e86] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.005229183s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-973562 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 ssh "cat /opt/local-path-provisioner/pvc-a475e29f-cfc6-4625-8bed-59ac85b175a1_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-973562 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-973562 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-973562 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.522764031s)
--- PASS: TestAddons/parallel/LocalPath (55.30s)
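
The local-path provisioner check above boils down to creating a PVC and a pod that writes into it, then reading the provisioned file back from the node. The sketch below reuses the exact commands from the log; note that the host path embeds the PVC UID, so the directory name differs on every run.

    kubectl --context addons-973562 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-973562 apply -f testdata/storage-provisioner-rancher/pod.yaml
    kubectl --context addons-973562 get pvc test-pvc -o jsonpath='{.status.phase}' -n default
    # The path below is from this run's log; the pvc-<uid> segment changes on every run.
    out/minikube-linux-amd64 -p addons-973562 ssh "cat /opt/local-path-provisioner/pvc-a475e29f-cfc6-4625-8bed-59ac85b175a1_default_test-pvc/file1"
    kubectl --context addons-973562 delete pod test-local-path
    kubectl --context addons-973562 delete pvc test-pvc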

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-9rkx2" [4d297fcf-2d70-4adb-b547-f8b1dbe59d7b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004620531s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-973562
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
TestAddons/parallel/Yakd (12.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-7rjv4" [a53e3efb-7b29-4fb4-92ac-86d3d4923d80] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004216404s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-973562 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-973562 addons disable yakd --alsologtostderr -v=1: (6.111655346s)
--- PASS: TestAddons/parallel/Yakd (12.12s)

                                                
                                    
TestCertOptions (73.04s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-194487 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0815 18:24:52.218231   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-194487 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m11.636996466s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-194487 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-194487 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-194487 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-194487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-194487
--- PASS: TestCertOptions (73.04s)
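
To spot-check custom API-server SANs and port outside the test harness, the same commands can be run directly. A minimal sketch, assuming a fresh profile name; the flags are exactly the ones the test passes.

    out/minikube-linux-amd64 start -p cert-options-194487 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # Dump the generated API-server certificate and check its SANs and port by eye.
    out/minikube-linux-amd64 -p cert-options-194487 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    out/minikube-linux-amd64 delete -p cert-options-194487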

                                                
                                    
TestCertExpiration (286.12s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-003860 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-003860 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (59.425146507s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-003860 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-003860 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (45.877869635s)
helpers_test.go:175: Cleaning up "cert-expiration-003860" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-003860
--- PASS: TestCertExpiration (286.12s)
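
The certificate-expiration check is two starts of the same profile with different --cert-expiration values: the first issues short-lived certificates, the second start regenerates them with a long lifetime. A minimal sketch using the commands from the log:

    out/minikube-linux-amd64 start -p cert-expiration-003860 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # ...wait for the 3m certificates to expire, then restart so they are reissued with a one-year lifetime.
    out/minikube-linux-amd64 start -p cert-expiration-003860 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p cert-expiration-003860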

                                                
                                    
TestForceSystemdFlag (85.26s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-975168 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-975168 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.080881651s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-975168 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-975168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-975168
--- PASS: TestForceSystemdFlag (85.26s)
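
--force-systemd is verified by reading the CRI-O drop-in config that minikube writes on the node. The sketch below repeats the log's commands; the grep for cgroup_manager is an added convenience and assumes that is the key the drop-in sets (the test itself just cats the whole file and inspects its contents).

    out/minikube-linux-amd64 start -p force-systemd-flag-975168 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
    # With --force-systemd the drop-in should select the systemd cgroup manager.
    out/minikube-linux-amd64 -p force-systemd-flag-975168 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    out/minikube-linux-amd64 delete -p force-systemd-flag-975168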

                                                
                                    
TestForceSystemdEnv (48.15s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-618999 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-618999 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (47.156841457s)
helpers_test.go:175: Cleaning up "force-systemd-env-618999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-618999
--- PASS: TestForceSystemdEnv (48.15s)

                                                
                                    
TestKVMDriverInstallOrUpdate (5.21s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.21s)

                                                
                                    
TestErrorSpam/setup (41.39s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-892634 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-892634 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-892634 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-892634 --driver=kvm2  --container-runtime=crio: (41.390391471s)
--- PASS: TestErrorSpam/setup (41.39s)

                                                
                                    
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
TestErrorSpam/status (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 status
--- PASS: TestErrorSpam/status (0.71s)

                                                
                                    
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
TestErrorSpam/unpause (1.7s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 unpause
--- PASS: TestErrorSpam/unpause (1.70s)

                                                
                                    
TestErrorSpam/stop (4.77s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 stop: (1.607403481s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 stop: (1.562481091s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-892634 --log_dir /tmp/nospam-892634 stop: (1.599745048s)
--- PASS: TestErrorSpam/stop (4.77s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19450-13013/.minikube/files/etc/test/nested/copy/20219/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (47.85s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773344 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0815 17:19:52.218640   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:52.225739   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:52.237102   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:52.258448   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:52.299841   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:52.381335   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:52.542879   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:52.864590   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:53.506669   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:54.788238   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:57.350336   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:20:02.472368   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:20:12.714651   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-773344 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (47.849416952s)
--- PASS: TestFunctional/serial/StartWithProxy (47.85s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (53.64s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773344 --alsologtostderr -v=8
E0815 17:20:33.197035   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:21:14.159426   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-773344 --alsologtostderr -v=8: (53.640984449s)
functional_test.go:663: soft start took 53.641604121s for "functional-773344" cluster.
--- PASS: TestFunctional/serial/SoftStart (53.64s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-773344 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:3.1: (1.174762358s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:3.3: (1.237233608s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:latest: (1.069365092s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)
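
The cache subtests exercise minikube's host-side image cache. Caching remote images and confirming they are present inside the node looks like this; the commands are taken from the log, with the crictl check borrowed from the verify_cache_inside_node subtest further down.

    out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:3.3
    out/minikube-linux-amd64 -p functional-773344 cache add registry.k8s.io/pause:latest
    # List what the host-side cache knows about, then confirm the images exist inside the node.
    out/minikube-linux-amd64 cache list
    out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl images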

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-773344 /tmp/TestFunctionalserialCacheCmdcacheadd_local3422663408/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cache add minikube-local-cache-test:functional-773344
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 cache add minikube-local-cache-test:functional-773344: (1.931739337s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cache delete minikube-local-cache-test:functional-773344
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-773344
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.24s)
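
Local images go through the same cache: build with docker on the host, add the tag to the cache, and remove it again when done. A sketch using the commands from the log; <build-context> is a placeholder for any directory containing a Dockerfile (the test uses a per-run temp dir).

    docker build -t minikube-local-cache-test:functional-773344 <build-context>
    out/minikube-linux-amd64 -p functional-773344 cache add minikube-local-cache-test:functional-773344
    # Clean up the cached entry and the host-side image.
    out/minikube-linux-amd64 -p functional-773344 cache delete minikube-local-cache-test:functional-773344
    docker rmi minikube-local-cache-test:functional-773344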

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (203.08477ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
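
cache reload pushes cached images back into the node, which is what the subtest above verifies: the image is removed with crictl, inspecti then fails, and after a reload it succeeds again. A minimal sketch of that round trip:

    out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Expected to fail (exit status 1): the image is no longer present in the node.
    out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-773344 cache reload
    # Now the image is back and inspecti succeeds.
    out/minikube-linux-amd64 -p functional-773344 ssh sudo crictl inspecti registry.k8s.io/pause:latest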

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 kubectl -- --context functional-773344 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-773344 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (367.49s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0815 17:22:36.084437   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:24:52.219103   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:25:19.926467   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-773344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m7.485956988s)
functional_test.go:761: restart took 6m7.48608362s for "functional-773344" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (367.49s)
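
--extra-config forwards component flags to the control plane on a restart; here it enables the NamespaceAutoProvision admission plugin on the API server. The single command from the log:

    # Restart the existing profile, passing an admission-plugin setting through to kube-apiserver.
    out/minikube-linux-amd64 start -p functional-773344 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all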

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-773344 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
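
The health check above reads the control-plane pods' status from the API. The first command is the one the test runs; the jsonpath variant underneath is an assumed convenience for a quick phase summary, not something the test executes.

    kubectl --context functional-773344 get po -l tier=control-plane -n kube-system -o=json
    # Assumed shorthand: print each control-plane pod with its phase.
    kubectl --context functional-773344 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.phase}{"\n"}{end}'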

                                                
                                    
TestFunctional/serial/LogsCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 logs: (1.110712801s)
--- PASS: TestFunctional/serial/LogsCmd (1.11s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 logs --file /tmp/TestFunctionalserialLogsFileCmd1712344084/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 logs --file /tmp/TestFunctionalserialLogsFileCmd1712344084/001/logs.txt: (1.101831642s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.10s)
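
Both log subtests use the same command; the only difference is that --file writes the output to a path instead of stdout. For example (the output path here is an arbitrary writable location, not the test's temp file):

    out/minikube-linux-amd64 -p functional-773344 logs
    # Same output, written to a file instead of stdout.
    out/minikube-linux-amd64 -p functional-773344 logs --file /tmp/logs.txt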

                                                
                                    
TestFunctional/serial/InvalidService (3.96s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-773344 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-773344
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-773344: exit status 115 (262.489998ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.182:30211 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-773344 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)
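
The invalid-service check shows how minikube service fails when a Service has no running backing pods: the command exits with status 115 and an SVC_UNREACHABLE message, as captured above. Reproducing it follows the log directly (invalidsvc.yaml lives in the test's testdata directory):

    kubectl --context functional-773344 apply -f testdata/invalidsvc.yaml
    # Expected to fail: exit status 115, "Exiting due to SVC_UNREACHABLE".
    out/minikube-linux-amd64 service invalid-svc -p functional-773344
    kubectl --context functional-773344 delete -f testdata/invalidsvc.yaml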

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 config get cpus: exit status 14 (50.400297ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 config get cpus: exit status 14 (43.799679ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.30s)
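
The config subtest cycles a key through unset/get/set/get/unset; config get on a key that is not set exits with status 14, which accounts for the two non-zero exits above. The same cycle by hand:

    out/minikube-linux-amd64 -p functional-773344 config unset cpus
    out/minikube-linux-amd64 -p functional-773344 config get cpus     # exit 14: key not set
    out/minikube-linux-amd64 -p functional-773344 config set cpus 2
    out/minikube-linux-amd64 -p functional-773344 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-773344 config unset cpus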

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-773344 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-773344 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 31489: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.61s)

                                                
                                    
TestFunctional/parallel/DryRun (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773344 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-773344 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (127.70539ms)

                                                
                                                
-- stdout --
	* [functional-773344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:28:12.084875   31314 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:28:12.084990   31314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:28:12.085000   31314 out.go:358] Setting ErrFile to fd 2...
	I0815 17:28:12.085005   31314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:28:12.085165   31314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:28:12.085711   31314 out.go:352] Setting JSON to false
	I0815 17:28:12.086726   31314 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4238,"bootTime":1723738654,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:28:12.086784   31314 start.go:139] virtualization: kvm guest
	I0815 17:28:12.089199   31314 out.go:177] * [functional-773344] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 17:28:12.090624   31314 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:28:12.090640   31314 notify.go:220] Checking for updates...
	I0815 17:28:12.093250   31314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:28:12.094690   31314 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:28:12.096092   31314 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:28:12.097318   31314 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:28:12.098496   31314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:28:12.099974   31314 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:28:12.100423   31314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:28:12.100464   31314 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:28:12.115244   31314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34853
	I0815 17:28:12.115652   31314 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:28:12.116179   31314 main.go:141] libmachine: Using API Version  1
	I0815 17:28:12.116201   31314 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:28:12.116570   31314 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:28:12.116739   31314 main.go:141] libmachine: (functional-773344) Calling .DriverName
	I0815 17:28:12.116978   31314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:28:12.117265   31314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:28:12.117311   31314 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:28:12.131535   31314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44397
	I0815 17:28:12.131946   31314 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:28:12.132416   31314 main.go:141] libmachine: Using API Version  1
	I0815 17:28:12.132433   31314 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:28:12.132727   31314 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:28:12.132919   31314 main.go:141] libmachine: (functional-773344) Calling .DriverName
	I0815 17:28:12.164615   31314 out.go:177] * Using the kvm2 driver based on existing profile
	I0815 17:28:12.165786   31314 start.go:297] selected driver: kvm2
	I0815 17:28:12.165809   31314 start.go:901] validating driver "kvm2" against &{Name:functional-773344 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-773344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:28:12.165938   31314 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:28:12.168208   31314 out.go:201] 
	W0815 17:28:12.169357   31314 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 17:28:12.170629   31314 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773344 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.26s)
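
A --dry-run start validates flags against the existing profile without touching the VM; asking for 250MB trips the memory check and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second invocation with no memory override validates cleanly. The two commands from the log:

    # Fails fast: requested memory is below minikube's usable minimum of 1800MB (exit status 23).
    out/minikube-linux-amd64 start -p functional-773344 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio
    # The same dry run without the memory override validates against the existing profile and succeeds.
    out/minikube-linux-amd64 start -p functional-773344 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio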

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-773344 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-773344 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (132.381789ms)

                                                
                                                
-- stdout --
	* [functional-773344] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:28:12.352033   31370 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:28:12.352149   31370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:28:12.352158   31370 out.go:358] Setting ErrFile to fd 2...
	I0815 17:28:12.352162   31370 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:28:12.352448   31370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 17:28:12.352971   31370 out.go:352] Setting JSON to false
	I0815 17:28:12.353956   31370 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4238,"bootTime":1723738654,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 17:28:12.354012   31370 start.go:139] virtualization: kvm guest
	I0815 17:28:12.355972   31370 out.go:177] * [functional-773344] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0815 17:28:12.357309   31370 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:28:12.357311   31370 notify.go:220] Checking for updates...
	I0815 17:28:12.358586   31370 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:28:12.359877   31370 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 17:28:12.361241   31370 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 17:28:12.362687   31370 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 17:28:12.363958   31370 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:28:12.365801   31370 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 17:28:12.366528   31370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:28:12.366579   31370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:28:12.381369   31370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38757
	I0815 17:28:12.381769   31370 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:28:12.382327   31370 main.go:141] libmachine: Using API Version  1
	I0815 17:28:12.382354   31370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:28:12.382743   31370 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:28:12.382923   31370 main.go:141] libmachine: (functional-773344) Calling .DriverName
	I0815 17:28:12.383179   31370 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:28:12.383453   31370 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 17:28:12.383491   31370 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 17:28:12.397921   31370 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41171
	I0815 17:28:12.398296   31370 main.go:141] libmachine: () Calling .GetVersion
	I0815 17:28:12.398727   31370 main.go:141] libmachine: Using API Version  1
	I0815 17:28:12.398749   31370 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 17:28:12.399058   31370 main.go:141] libmachine: () Calling .GetMachineName
	I0815 17:28:12.399238   31370 main.go:141] libmachine: (functional-773344) Calling .DriverName
	I0815 17:28:12.430754   31370 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0815 17:28:12.432016   31370 start.go:297] selected driver: kvm2
	I0815 17:28:12.432043   31370 start.go:901] validating driver "kvm2" against &{Name:functional-773344 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19443/minikube-v1.33.1-1723650137-19443-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-773344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.182 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:28:12.432159   31370 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:28:12.434401   31370 out.go:201] 
	W0815 17:28:12.435717   31370 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 17:28:12.437175   31370 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
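
For reference, the three status invocations above can be reproduced outside the test harness. A minimal Go sketch (not part of the suite), assuming the built binary sits at out/minikube-linux-amd64 and the functional-773344 profile exists:

// status_sketch.go - runs `minikube status` the same three ways the test does
// and prints each invocation's output and exit error, if any.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	binary := "out/minikube-linux-amd64" // assumed path to the built binary
	profile := "functional-773344"       // assumed existing profile

	invocations := [][]string{
		{"-p", profile, "status"},
		{"-p", profile, "status", "-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"},
		{"-p", profile, "status", "-o", "json"},
	}

	for _, args := range invocations {
		out, err := exec.Command(binary, args...).CombinedOutput()
		fmt.Printf("minikube %v\n%s(err=%v)\n\n", args, out, err)
	}
}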

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (10.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-773344 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-773344 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4lmxr" [24bdd09a-a139-438a-8db3-8d703bb7c00e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-4lmxr" [24bdd09a-a139-438a-8db3-8d703bb7c00e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.296824217s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.182:31059
functional_test.go:1675: http://192.168.39.182:31059: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-4lmxr

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.182:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.182:31059
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.84s)
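
The flow above (create a deployment, expose it as a NodePort, resolve the URL with `minikube service --url`, then GET it) can be sketched outside the harness as follows. This is a simplified illustration, not the suite's own helper: it waits on the deployment's Available condition instead of polling the pod as the test does, and it assumes kubectl and the built minikube binary are reachable at the paths shown.

// svc_connect_sketch.go - mirrors the ServiceCmdConnect flow: create a
// deployment, expose it on a NodePort, ask minikube for the URL, then GET it.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "--context=functional-773344" // assumed kubectl context

	run("kubectl", ctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver:1.8")
	run("kubectl", ctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")
	run("kubectl", ctx, "wait", "--for=condition=Available",
		"deployment/hello-node-connect", "--timeout=120s")

	// minikube prints the reachable NodePort URL, e.g. http://192.168.39.182:31059
	url := strings.TrimSpace(run("out/minikube-linux-amd64",
		"-p", "functional-773344", "service", "hello-node-connect", "--url"))

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}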

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (46.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [37da692e-bc1b-4a43-a034-8bdfabb1bbc9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00479732s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-773344 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-773344 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-773344 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-773344 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [42e7acf1-aee9-45f4-a36a-ab2c346851db] Pending
helpers_test.go:344: "sp-pod" [42e7acf1-aee9-45f4-a36a-ab2c346851db] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [42e7acf1-aee9-45f4-a36a-ab2c346851db] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.008812802s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-773344 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-773344 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-773344 delete -f testdata/storage-provisioner/pod.yaml: (3.190023956s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-773344 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7aeb29e2-06be-454f-9c0f-746f3bf8e3b6] Pending
helpers_test.go:344: "sp-pod" [7aeb29e2-06be-454f-9c0f-746f3bf8e3b6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2024/08/15 17:28:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [7aeb29e2-06be-454f-9c0f-746f3bf8e3b6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004531447s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-773344 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.01s)
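
The core assertion in the PVC test is that a file written into the claim-backed mount survives deleting and recreating the pod. A minimal Go sketch of that check, assuming the same testdata/storage-provisioner manifests and the functional-773344 context (the helper below is illustrative, not the suite's):

// pvc_persistence_sketch.go - writes a file into a PVC-backed pod, deletes the
// pod, recreates it, and checks the file is still there.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func kubectl(args ...string) string {
	args = append([]string{"--context", "functional-773344"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod; the claim (and its data) should outlive it.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")

	if !strings.Contains(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"), "foo") {
		log.Fatal("file did not survive pod recreation")
	}
	log.Println("PVC data persisted across pod recreation")
}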

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh -n functional-773344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cp functional-773344:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4211314709/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh -n functional-773344 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh -n functional-773344 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.21s)
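
The cp test round-trips a file into the VM and reads it back over ssh. A short sketch of the same verification in Go, assuming the binary path and profile used above (helper names are illustrative):

// cp_roundtrip_sketch.go - copies a local file into the VM with `minikube cp`,
// then reads it back over `minikube ssh` and compares the contents.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64" // assumed path to the built binary
	profile := "functional-773344"

	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}

	if out, err := exec.Command(mk, "-p", profile, "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	remote, err := exec.Command(mk, "-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
		log.Fatal("copied file does not match the original")
	}
	log.Println("cp round-trip verified")
}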

                                                
                                    
x
+
TestFunctional/parallel/MySQL (23.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-773344 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-vnc85" [abfc6d3c-1e12-43d4-b5fc-006e23d74f7a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-vnc85" [abfc6d3c-1e12-43d4-b5fc-006e23d74f7a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.003488334s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-773344 exec mysql-6cdb49bbb-vnc85 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-773344 exec mysql-6cdb49bbb-vnc85 -- mysql -ppassword -e "show databases;": exit status 1 (203.293607ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-773344 exec mysql-6cdb49bbb-vnc85 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.27s)
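
The first `show databases;` attempt above fails with ERROR 2002 because the pod reports Running before mysqld has finished creating its socket; the test simply retries and the second attempt succeeds. A hedged retry-loop sketch of the same idea (pod name taken from this run; in practice it would be looked up):

// mysql_retry_sketch.go - retries a query against the mysql pod until the
// server socket is ready, since Running does not yet mean mysqld accepts
// connections.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	pod := "mysql-6cdb49bbb-vnc85" // assumed pod name from the run above
	deadline := time.Now().Add(2 * time.Minute)

	for {
		out, err := exec.Command("kubectl", "--context", "functional-773344",
			"exec", pod, "--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			log.Printf("query succeeded:\n%s", out)
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mysql never became reachable: %v\n%s", err, out)
		}
		time.Sleep(5 * time.Second)
	}
}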

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/20219/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo cat /etc/test/nested/copy/20219/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/20219.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo cat /etc/ssl/certs/20219.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/20219.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo cat /usr/share/ca-certificates/20219.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/202192.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo cat /etc/ssl/certs/202192.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/202192.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo cat /usr/share/ca-certificates/202192.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.42s)
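
The cert-sync check above looks for the synced certificate under both /etc/ssl/certs and /usr/share/ca-certificates, plus the hash-named copy (51391683.0 / 3ec20f2e.0 are the OpenSSL subject-hash forms of the same certificates). A small probe sketch under the same assumptions about binary path and profile:

// certsync_probe_sketch.go - checks that a user-supplied certificate was synced
// into the VM at the locations the test inspects.
package main

import (
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	paths := []string{
		"/etc/ssl/certs/20219.pem",
		"/usr/share/ca-certificates/20219.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		if out, err := exec.Command(mk, "-p", "functional-773344", "ssh",
			"sudo test -f "+p).CombinedOutput(); err != nil {
			log.Fatalf("%s missing in VM: %v\n%s", p, err, out)
		}
		log.Printf("found %s", p)
	}
}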

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-773344 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 ssh "sudo systemctl is-active docker": exit status 1 (235.68021ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 ssh "sudo systemctl is-active containerd": exit status 1 (226.068023ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.46s)
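
Note that `systemctl is-active` exits non-zero for an inactive unit (hence the exit status 3 above), so the exit code alone cannot distinguish "inactive" from a genuine ssh failure; the stdout text has to be inspected. A minimal sketch of that check, assuming the same binary path and profile:

// runtime_inactive_sketch.go - confirms docker and containerd are not active
// when crio is the selected runtime, by reading the stdout of
// `systemctl is-active` rather than trusting the exit code.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	mk := "out/minikube-linux-amd64"
	for _, unit := range []string{"docker", "containerd"} {
		// Output() still returns captured stdout when the remote command exits non-zero.
		out, _ := exec.Command(mk, "-p", "functional-773344", "ssh",
			"sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if state == "active" {
			log.Fatalf("%s should not be active alongside crio", unit)
		}
		log.Printf("%s: %s", unit, state)
	}
}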

                                                
                                    
x
+
TestFunctional/parallel/License (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (11.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-773344 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-773344 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-qg2dg" [73820891-666a-47f7-bd03-0b0ad60791fc] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-qg2dg" [73820891-666a-47f7-bd03-0b0ad60791fc] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004775402s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.20s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773344 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-773344
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240730-75a5af0c
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773344 image ls --format short --alsologtostderr:
I0815 17:28:15.153323   31570 out.go:345] Setting OutFile to fd 1 ...
I0815 17:28:15.153456   31570 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:15.153466   31570 out.go:358] Setting ErrFile to fd 2...
I0815 17:28:15.153473   31570 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:15.153624   31570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
I0815 17:28:15.154156   31570 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:15.154277   31570 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:15.154649   31570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:15.154700   31570 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:15.169147   31570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37527
I0815 17:28:15.169542   31570 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:15.170023   31570 main.go:141] libmachine: Using API Version  1
I0815 17:28:15.170049   31570 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:15.170361   31570 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:15.170550   31570 main.go:141] libmachine: (functional-773344) Calling .GetState
I0815 17:28:15.172202   31570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:15.172241   31570 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:15.188362   31570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
I0815 17:28:15.188746   31570 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:15.189173   31570 main.go:141] libmachine: Using API Version  1
I0815 17:28:15.189193   31570 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:15.189542   31570 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:15.189732   31570 main.go:141] libmachine: (functional-773344) Calling .DriverName
I0815 17:28:15.189959   31570 ssh_runner.go:195] Run: systemctl --version
I0815 17:28:15.189982   31570 main.go:141] libmachine: (functional-773344) Calling .GetSSHHostname
I0815 17:28:15.192510   31570 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:15.192905   31570 main.go:141] libmachine: (functional-773344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:cf:88", ip: ""} in network mk-functional-773344: {Iface:virbr1 ExpiryTime:2024-08-15 18:19:58 +0000 UTC Type:0 Mac:52:54:00:ad:cf:88 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-773344 Clientid:01:52:54:00:ad:cf:88}
I0815 17:28:15.192930   31570 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined IP address 192.168.39.182 and MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:15.193086   31570 main.go:141] libmachine: (functional-773344) Calling .GetSSHPort
I0815 17:28:15.193274   31570 main.go:141] libmachine: (functional-773344) Calling .GetSSHKeyPath
I0815 17:28:15.193423   31570 main.go:141] libmachine: (functional-773344) Calling .GetSSHUsername
I0815 17:28:15.193598   31570 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/functional-773344/id_rsa Username:docker}
I0815 17:28:15.275394   31570 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 17:28:15.316849   31570 main.go:141] libmachine: Making call to close driver server
I0815 17:28:15.316866   31570 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:15.317181   31570 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
I0815 17:28:15.317182   31570 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:15.317222   31570 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:15.317236   31570 main.go:141] libmachine: Making call to close driver server
I0815 17:28:15.317247   31570 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:15.317524   31570 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:15.317541   31570 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:15.317543   31570 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773344 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler          | v1.31.0            | 1766f54c897f0 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.0            | 604f5db92eaa8 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.0            | 045733566833c | 89.4MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/minikube-local-cache-test     | functional-773344  | bdf61f1dda057 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.0            | ad83b2ca7b09e | 92.7MB |
| docker.io/kindest/kindnetd              | v20240730-75a5af0c | 917d7814b9b5b | 87.2MB |
| docker.io/library/nginx                 | latest             | 900dca2a61f57 | 192MB  |
| localhost/my-image                      | functional-773344  | 294d8dbb6fb64 | 1.47MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773344 image ls --format table --alsologtostderr:
I0815 17:28:20.062959   31838 out.go:345] Setting OutFile to fd 1 ...
I0815 17:28:20.063075   31838 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:20.063083   31838 out.go:358] Setting ErrFile to fd 2...
I0815 17:28:20.063087   31838 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:20.063248   31838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
I0815 17:28:20.063758   31838 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:20.063858   31838 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:20.064200   31838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:20.064238   31838 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:20.078967   31838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43709
I0815 17:28:20.079398   31838 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:20.080003   31838 main.go:141] libmachine: Using API Version  1
I0815 17:28:20.080031   31838 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:20.080400   31838 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:20.080620   31838 main.go:141] libmachine: (functional-773344) Calling .GetState
I0815 17:28:20.082452   31838 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:20.082492   31838 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:20.097943   31838 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46621
I0815 17:28:20.098436   31838 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:20.098919   31838 main.go:141] libmachine: Using API Version  1
I0815 17:28:20.098942   31838 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:20.099287   31838 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:20.099474   31838 main.go:141] libmachine: (functional-773344) Calling .DriverName
I0815 17:28:20.099651   31838 ssh_runner.go:195] Run: systemctl --version
I0815 17:28:20.099677   31838 main.go:141] libmachine: (functional-773344) Calling .GetSSHHostname
I0815 17:28:20.102807   31838 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:20.103276   31838 main.go:141] libmachine: (functional-773344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:cf:88", ip: ""} in network mk-functional-773344: {Iface:virbr1 ExpiryTime:2024-08-15 18:19:58 +0000 UTC Type:0 Mac:52:54:00:ad:cf:88 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-773344 Clientid:01:52:54:00:ad:cf:88}
I0815 17:28:20.103306   31838 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined IP address 192.168.39.182 and MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:20.103404   31838 main.go:141] libmachine: (functional-773344) Calling .GetSSHPort
I0815 17:28:20.103551   31838 main.go:141] libmachine: (functional-773344) Calling .GetSSHKeyPath
I0815 17:28:20.103698   31838 main.go:141] libmachine: (functional-773344) Calling .GetSSHUsername
I0815 17:28:20.103844   31838 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/functional-773344/id_rsa Username:docker}
I0815 17:28:20.202736   31838 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 17:28:20.265629   31838 main.go:141] libmachine: Making call to close driver server
I0815 17:28:20.265644   31838 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:20.265891   31838 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:20.265911   31838 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:20.265913   31838 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
I0815 17:28:20.265928   31838 main.go:141] libmachine: Making call to close driver server
I0815 17:28:20.265936   31838 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:20.266193   31838 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:20.266210   31838 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:20.266244   31838 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773344 image ls --format json --alsologtostderr:
[{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":["registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf","registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"92728217"},{"id":"5107333e08a87b836d
48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"294d8dbb6fb648cb7e15dddaa9bd4bc857595e660221a5852a8f21c202f9037e","repoDigests":["localhost/my-image@sha256:c072be29d95c0ae600d11fd2fa98aaeb5c726bd1a1efec4c5e5f976497c0fa4c"],"repoTags":["localhost/my-image:functional-773344"],"size":"1468600"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[
"registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a","registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"68420936"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2
e74cdcb91347869bfb4fd3a26cd3c3","docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"87165492"},{"id":"da79f4fc9315659622082a15da574c7331749ce001cef62611195618cb86813d","repoDigests":["docker.io/library/2f307c97a75b1005a7fbd760f0b930279165d4c2808f12ea2f6b28e556c6113f-tmp@sha256:181bea2a7091f647951fc261510bd80899f1365a837b2933ddbb5cc008fc889a"],"repoTags":[],"size":"1466018"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400
c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf","registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"95233506"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d","registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"89437512"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b13
3eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40","docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a
85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"bdf61f1dda05722ad960134243a0d343ef568aa63898959e0c18cc05cf2cd819","repoDigests":["localhost/minikube-local-cache-test@sha256:5a4666ba0ecf30406ac8d54f89c068930f9dc6922a037bb692dfb286f6987d1e"],"repoTags":["localhost/minikube-local-cache-test:functional-773344"],"size":"3330"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773344 image ls --format json --alsologtostderr:
I0815 17:28:19.770410   31791 out.go:345] Setting OutFile to fd 1 ...
I0815 17:28:19.770667   31791 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:19.770679   31791 out.go:358] Setting ErrFile to fd 2...
I0815 17:28:19.770686   31791 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:19.770948   31791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
I0815 17:28:19.771733   31791 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:19.771867   31791 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:19.772400   31791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:19.772457   31791 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:19.788528   31791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46143
I0815 17:28:19.789012   31791 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:19.789596   31791 main.go:141] libmachine: Using API Version  1
I0815 17:28:19.789617   31791 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:19.789966   31791 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:19.790121   31791 main.go:141] libmachine: (functional-773344) Calling .GetState
I0815 17:28:19.791885   31791 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:19.791930   31791 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:19.806981   31791 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45367
I0815 17:28:19.807354   31791 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:19.807826   31791 main.go:141] libmachine: Using API Version  1
I0815 17:28:19.807850   31791 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:19.808225   31791 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:19.808454   31791 main.go:141] libmachine: (functional-773344) Calling .DriverName
I0815 17:28:19.808666   31791 ssh_runner.go:195] Run: systemctl --version
I0815 17:28:19.808702   31791 main.go:141] libmachine: (functional-773344) Calling .GetSSHHostname
I0815 17:28:19.811648   31791 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:19.812009   31791 main.go:141] libmachine: (functional-773344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:cf:88", ip: ""} in network mk-functional-773344: {Iface:virbr1 ExpiryTime:2024-08-15 18:19:58 +0000 UTC Type:0 Mac:52:54:00:ad:cf:88 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-773344 Clientid:01:52:54:00:ad:cf:88}
I0815 17:28:19.812042   31791 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined IP address 192.168.39.182 and MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:19.812116   31791 main.go:141] libmachine: (functional-773344) Calling .GetSSHPort
I0815 17:28:19.812307   31791 main.go:141] libmachine: (functional-773344) Calling .GetSSHKeyPath
I0815 17:28:19.812453   31791 main.go:141] libmachine: (functional-773344) Calling .GetSSHUsername
I0815 17:28:19.812630   31791 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/functional-773344/id_rsa Username:docker}
I0815 17:28:19.947235   31791 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 17:28:20.010622   31791 main.go:141] libmachine: Making call to close driver server
I0815 17:28:20.010638   31791 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:20.010894   31791 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:20.010910   31791 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:20.010919   31791 main.go:141] libmachine: Making call to close driver server
I0815 17:28:20.010925   31791 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
I0815 17:28:20.010927   31791 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:20.011181   31791 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:20.011196   31791 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:20.011214   31791 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
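
The JSON listing above is machine-readable; a short sketch that decodes it and prints each image's first tag and reported size, assuming the same binary path and profile (the struct mirrors the id/repoTags/size fields visible in the output):

// image_list_json_sketch.go - decodes `minikube image ls --format json` into a
// struct slice and prints each image's first tag and size.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-773344",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}

	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		tag := "(untagged)"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}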

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773344 image ls --format yaml --alsologtostderr:
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 900dca2a61f5799aabe662339a940cf444dfd39777648ca6a953f82b685997ed
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
- docker.io/library/nginx@sha256:a3ab061d6909191271bcf24b9ab6eee9e8fc5f2fbf1525c5bd84d21f27a9d708
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: bdf61f1dda05722ad960134243a0d343ef568aa63898959e0c18cc05cf2cd819
repoDigests:
- localhost/minikube-local-cache-test@sha256:5a4666ba0ecf30406ac8d54f89c068930f9dc6922a037bb692dfb286f6987d1e
repoTags:
- localhost/minikube-local-cache-test:functional-773344
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:882f6d647c82b1edde30693f22643a8120d2b650469ca572e5c321f97192159a
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "68420936"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:678c105505947334539633c8eaf6999452dafaff0d23bdbb55e0729285fcfc5d
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "89437512"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c31eb37ccda83c6c8ee2ee5030a7038b04ecaa393d14cb71f01ab18147366fbf
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "92728217"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
- registry.k8s.io/kube-apiserver@sha256:64c595846c29945f619a1c3d420a8bfac87e93cb8d3641e222dd9ac412284001
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "95233506"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
- docker.io/kindest/kindnetd@sha256:60b58d454ebdf7f0f66e7550fc2be7b6f08dee8b39bedd62c26d42c3a5cf5c6a
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "87165492"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773344 image ls --format yaml --alsologtostderr:
I0815 17:28:15.360648   31594 out.go:345] Setting OutFile to fd 1 ...
I0815 17:28:15.360905   31594 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:15.360915   31594 out.go:358] Setting ErrFile to fd 2...
I0815 17:28:15.360920   31594 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:15.361145   31594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
I0815 17:28:15.362511   31594 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:15.362795   31594 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:15.363262   31594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:15.363306   31594 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:15.378410   31594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37129
I0815 17:28:15.378774   31594 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:15.379271   31594 main.go:141] libmachine: Using API Version  1
I0815 17:28:15.379291   31594 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:15.379607   31594 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:15.379771   31594 main.go:141] libmachine: (functional-773344) Calling .GetState
I0815 17:28:15.381368   31594 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:15.381402   31594 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:15.395853   31594 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41795
I0815 17:28:15.396227   31594 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:15.396647   31594 main.go:141] libmachine: Using API Version  1
I0815 17:28:15.396664   31594 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:15.396952   31594 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:15.397132   31594 main.go:141] libmachine: (functional-773344) Calling .DriverName
I0815 17:28:15.397327   31594 ssh_runner.go:195] Run: systemctl --version
I0815 17:28:15.397354   31594 main.go:141] libmachine: (functional-773344) Calling .GetSSHHostname
I0815 17:28:15.399572   31594 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:15.399925   31594 main.go:141] libmachine: (functional-773344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:cf:88", ip: ""} in network mk-functional-773344: {Iface:virbr1 ExpiryTime:2024-08-15 18:19:58 +0000 UTC Type:0 Mac:52:54:00:ad:cf:88 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-773344 Clientid:01:52:54:00:ad:cf:88}
I0815 17:28:15.399948   31594 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined IP address 192.168.39.182 and MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:15.400108   31594 main.go:141] libmachine: (functional-773344) Calling .GetSSHPort
I0815 17:28:15.400247   31594 main.go:141] libmachine: (functional-773344) Calling .GetSSHKeyPath
I0815 17:28:15.400367   31594 main.go:141] libmachine: (functional-773344) Calling .GetSSHUsername
I0815 17:28:15.400516   31594 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/functional-773344/id_rsa Username:docker}
I0815 17:28:15.479473   31594 ssh_runner.go:195] Run: sudo crictl images --output json
I0815 17:28:15.521147   31594 main.go:141] libmachine: Making call to close driver server
I0815 17:28:15.521159   31594 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:15.521392   31594 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:15.521412   31594 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:15.521424   31594 main.go:141] libmachine: Making call to close driver server
I0815 17:28:15.521434   31594 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:15.521659   31594 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:15.521673   31594 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:15.521700   31594 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)
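The `image ls --format yaml` output above is a flat list of entries with id, repoDigests, repoTags, and size fields (size is emitted as a quoted string). As a minimal sketch of decoding that shape, assuming the gopkg.in/yaml.v3 package is available, the struct below is illustrative and not minikube's internal type; the sample data is copied from the listing above.

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// image mirrors the fields visible in the `image ls --format yaml` output
// above; the struct is illustrative, not minikube's own type.
type image struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	// Two entries copied from the listing above (digests omitted for brevity).
	data := []byte(`- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
  repoTags:
  - registry.k8s.io/pause:3.10
  size: "742080"
- id: 917d7814b9b5b870a04ef7bbee5414485a7ba8385be9c26e1df27463f6fb6557
  repoTags:
  - docker.io/kindest/kindnetd:v20240730-75a5af0c
  size: "87165492"
`)
	var images []image
	if err := yaml.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
	}
}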

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 ssh pgrep buildkitd: exit status 1 (178.787749ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image build -t localhost/my-image:functional-773344 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 image build -t localhost/my-image:functional-773344 testdata/build --alsologtostderr: (3.751841326s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-773344 image build -t localhost/my-image:functional-773344 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> da79f4fc931
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-773344
--> 294d8dbb6fb
Successfully tagged localhost/my-image:functional-773344
294d8dbb6fb648cb7e15dddaa9bd4bc857595e660221a5852a8f21c202f9037e
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-773344 image build -t localhost/my-image:functional-773344 testdata/build --alsologtostderr:
I0815 17:28:15.746791   31649 out.go:345] Setting OutFile to fd 1 ...
I0815 17:28:15.747034   31649 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:15.747042   31649 out.go:358] Setting ErrFile to fd 2...
I0815 17:28:15.747046   31649 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:28:15.747190   31649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
I0815 17:28:15.747695   31649 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:15.748217   31649 config.go:182] Loaded profile config "functional-773344": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
I0815 17:28:15.748643   31649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:15.748683   31649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:15.763116   31649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42791
I0815 17:28:15.763591   31649 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:15.764235   31649 main.go:141] libmachine: Using API Version  1
I0815 17:28:15.764258   31649 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:15.764571   31649 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:15.764764   31649 main.go:141] libmachine: (functional-773344) Calling .GetState
I0815 17:28:15.766406   31649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0815 17:28:15.766447   31649 main.go:141] libmachine: Launching plugin server for driver kvm2
I0815 17:28:15.780902   31649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46017
I0815 17:28:15.781274   31649 main.go:141] libmachine: () Calling .GetVersion
I0815 17:28:15.781734   31649 main.go:141] libmachine: Using API Version  1
I0815 17:28:15.781756   31649 main.go:141] libmachine: () Calling .SetConfigRaw
I0815 17:28:15.782094   31649 main.go:141] libmachine: () Calling .GetMachineName
I0815 17:28:15.782260   31649 main.go:141] libmachine: (functional-773344) Calling .DriverName
I0815 17:28:15.782489   31649 ssh_runner.go:195] Run: systemctl --version
I0815 17:28:15.782531   31649 main.go:141] libmachine: (functional-773344) Calling .GetSSHHostname
I0815 17:28:15.785555   31649 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:15.786001   31649 main.go:141] libmachine: (functional-773344) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:cf:88", ip: ""} in network mk-functional-773344: {Iface:virbr1 ExpiryTime:2024-08-15 18:19:58 +0000 UTC Type:0 Mac:52:54:00:ad:cf:88 Iaid: IPaddr:192.168.39.182 Prefix:24 Hostname:functional-773344 Clientid:01:52:54:00:ad:cf:88}
I0815 17:28:15.786035   31649 main.go:141] libmachine: (functional-773344) DBG | domain functional-773344 has defined IP address 192.168.39.182 and MAC address 52:54:00:ad:cf:88 in network mk-functional-773344
I0815 17:28:15.786238   31649 main.go:141] libmachine: (functional-773344) Calling .GetSSHPort
I0815 17:28:15.786417   31649 main.go:141] libmachine: (functional-773344) Calling .GetSSHKeyPath
I0815 17:28:15.786556   31649 main.go:141] libmachine: (functional-773344) Calling .GetSSHUsername
I0815 17:28:15.786690   31649 sshutil.go:53] new ssh client: &{IP:192.168.39.182 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/functional-773344/id_rsa Username:docker}
I0815 17:28:15.863122   31649 build_images.go:161] Building image from path: /tmp/build.2776543797.tar
I0815 17:28:15.863180   31649 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 17:28:15.874112   31649 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2776543797.tar
I0815 17:28:15.878593   31649 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2776543797.tar: stat -c "%s %y" /var/lib/minikube/build/build.2776543797.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2776543797.tar': No such file or directory
I0815 17:28:15.878624   31649 ssh_runner.go:362] scp /tmp/build.2776543797.tar --> /var/lib/minikube/build/build.2776543797.tar (3072 bytes)
I0815 17:28:15.910165   31649 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2776543797
I0815 17:28:15.921126   31649 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2776543797 -xf /var/lib/minikube/build/build.2776543797.tar
I0815 17:28:15.930461   31649 crio.go:315] Building image: /var/lib/minikube/build/build.2776543797
I0815 17:28:15.930525   31649 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-773344 /var/lib/minikube/build/build.2776543797 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0815 17:28:19.412995   31649 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-773344 /var/lib/minikube/build/build.2776543797 --cgroup-manager=cgroupfs: (3.482440894s)
I0815 17:28:19.413079   31649 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2776543797
I0815 17:28:19.435192   31649 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2776543797.tar
I0815 17:28:19.451259   31649 build_images.go:217] Built localhost/my-image:functional-773344 from /tmp/build.2776543797.tar
I0815 17:28:19.451291   31649 build_images.go:133] succeeded building to: functional-773344
I0815 17:28:19.451296   31649 build_images.go:134] failed building to: 
I0815 17:28:19.451314   31649 main.go:141] libmachine: Making call to close driver server
I0815 17:28:19.451325   31649 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:19.451555   31649 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:19.451574   31649 main.go:141] libmachine: Making call to close connection to plugin binary
I0815 17:28:19.451583   31649 main.go:141] libmachine: Making call to close driver server
I0815 17:28:19.451590   31649 main.go:141] libmachine: (functional-773344) DBG | Closing plugin on server side
I0815 17:28:19.451593   31649 main.go:141] libmachine: (functional-773344) Calling .Close
I0815 17:28:19.451792   31649 main.go:141] libmachine: Successfully made call to close driver server
I0815 17:28:19.451802   31649 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.19s)
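The build log above shows minikube packaging the build context into /tmp/build.2776543797.tar, copying it to the node, unpacking it under /var/lib/minikube/build, and running podman build against it. The sketch below is a rough, standard-library-only illustration of the packaging step only; packBuildContext is a hypothetical helper, not minikube's build_images.go implementation, and the paths are placeholders.

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// packBuildContext tars the contents of dir into tarPath so the archive can
// be copied to the node and unpacked there, roughly mirroring the
// "Building image from path: /tmp/build.XXXX.tar" step in the log above.
func packBuildContext(dir, tarPath string) error {
	f, err := os.Create(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	defer tw.Close()

	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = filepath.ToSlash(rel)
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	if err := packBuildContext("testdata/build", "/tmp/build.example.tar"); err != nil {
		panic(err)
	}
}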

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.00s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.980121212s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-773344
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image load --daemon kicbase/echo-server:functional-773344 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-773344 image load --daemon kicbase/echo-server:functional-773344 --alsologtostderr: (1.789050348s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.99s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image load --daemon kicbase/echo-server:functional-773344 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-773344
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image load --daemon kicbase/echo-server:functional-773344 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 image save kicbase/echo-server:functional-773344 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 service list -o json
functional_test.go:1494: Took "314.280751ms" to run "out/minikube-linux-amd64 -p functional-773344 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.182:31897
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.182:31897
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "265.551748ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "43.447389ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "207.777578ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "44.160298ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (12.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdany-port181222934/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723742887448182104" to /tmp/TestFunctionalparallelMountCmdany-port181222934/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723742887448182104" to /tmp/TestFunctionalparallelMountCmdany-port181222934/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723742887448182104" to /tmp/TestFunctionalparallelMountCmdany-port181222934/001/test-1723742887448182104
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (173.3031ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 17:28 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 17:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 17:28 test-1723742887448182104
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh cat /mount-9p/test-1723742887448182104
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-773344 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4abcf8f2-dcc9-4a70-b8ea-4f9fcb72c67e] Pending
helpers_test.go:344: "busybox-mount" [4abcf8f2-dcc9-4a70-b8ea-4f9fcb72c67e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4abcf8f2-dcc9-4a70-b8ea-4f9fcb72c67e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4abcf8f2-dcc9-4a70-b8ea-4f9fcb72c67e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 10.003799008s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-773344 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdany-port181222934/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.82s)
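The first findmnt probe above exits with status 1 because the 9p mount is not up yet, and the test simply probes again. A minimal polling helper along the same lines, assuming the binary path and profile name from this run; waitForMount is an illustrative name, not part of the test suite.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount re-runs the same findmnt check the test uses until the 9p
// mount shows up or the attempt budget runs out.
func waitForMount(profile, mountPoint string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("mount %s did not appear after %d attempts", mountPoint, attempts)
}

func main() {
	if err := waitForMount("functional-773344", "/mount-9p", 10); err != nil {
		panic(err)
	}
	fmt.Println("9p mount is up")
}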

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdspecific-port1973475938/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (215.291112ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdspecific-port1973475938/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 ssh "sudo umount -f /mount-9p": exit status 1 (219.212293ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-773344 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdspecific-port1973475938/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3684289187/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3684289187/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3684289187/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T" /mount1: exit status 1 (287.030975ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-773344 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-773344 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3684289187/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3684289187/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-773344 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3684289187/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-773344
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-773344
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-773344
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (253.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-683878 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 17:29:52.218133   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-683878 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m12.481747172s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
E0815 17:32:47.733702   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:47.741015   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:47.752471   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:47.773937   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:47.815334   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/StartCluster (253.13s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
E0815 17:32:47.897062   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:48.059053   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- rollout status deployment/busybox
E0815 17:32:48.381171   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:49.023026   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:50.304641   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:32:52.865928   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-683878 -- rollout status deployment/busybox: (5.355547624s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-j8h8r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-lgsr4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-sk47b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-j8h8r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-lgsr4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-sk47b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-j8h8r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-lgsr4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-sk47b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.47s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-j8h8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-j8h8r -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-lgsr4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-lgsr4 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-sk47b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-683878 -- exec busybox-7dff88458-sk47b -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)
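The pipeline nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 above takes the fifth line of nslookup's output and returns its third space-separated field, which is the host IP (192.168.39.1 here) that the follow-up ping targets. A small Go sketch of the same extraction; the sample output is an assumed busybox-style nslookup response, not captured from this run.

package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup reproduces the awk 'NR==5' | cut -d' ' -f3 pipeline:
// take line 5 of the nslookup output and return its third space-separated
// field (cut splits on single spaces, so we do the same).
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.39.1 host.minikube.internal`
	fmt.Println(hostIPFromNslookup(sample)) // 192.168.39.1
}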

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-683878 -v=7 --alsologtostderr
E0815 17:32:57.987591   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:33:08.229743   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:33:28.711822   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-683878 -v=7 --alsologtostderr: (55.838689191s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.65s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-683878 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp testdata/cp-test.txt ha-683878:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878:/home/docker/cp-test.txt ha-683878-m02:/home/docker/cp-test_ha-683878_ha-683878-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m02 "sudo cat /home/docker/cp-test_ha-683878_ha-683878-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878:/home/docker/cp-test.txt ha-683878-m03:/home/docker/cp-test_ha-683878_ha-683878-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m03 "sudo cat /home/docker/cp-test_ha-683878_ha-683878-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878:/home/docker/cp-test.txt ha-683878-m04:/home/docker/cp-test_ha-683878_ha-683878-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m04 "sudo cat /home/docker/cp-test_ha-683878_ha-683878-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp testdata/cp-test.txt ha-683878-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m02:/home/docker/cp-test.txt ha-683878:/home/docker/cp-test_ha-683878-m02_ha-683878.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878 "sudo cat /home/docker/cp-test_ha-683878-m02_ha-683878.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m02:/home/docker/cp-test.txt ha-683878-m03:/home/docker/cp-test_ha-683878-m02_ha-683878-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m03 "sudo cat /home/docker/cp-test_ha-683878-m02_ha-683878-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m02:/home/docker/cp-test.txt ha-683878-m04:/home/docker/cp-test_ha-683878-m02_ha-683878-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m04 "sudo cat /home/docker/cp-test_ha-683878-m02_ha-683878-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp testdata/cp-test.txt ha-683878-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt ha-683878:/home/docker/cp-test_ha-683878-m03_ha-683878.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878 "sudo cat /home/docker/cp-test_ha-683878-m03_ha-683878.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt ha-683878-m02:/home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m02 "sudo cat /home/docker/cp-test_ha-683878-m03_ha-683878-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m03:/home/docker/cp-test.txt ha-683878-m04:/home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m04 "sudo cat /home/docker/cp-test_ha-683878-m03_ha-683878-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp testdata/cp-test.txt ha-683878-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3030958127/001/cp-test_ha-683878-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt ha-683878:/home/docker/cp-test_ha-683878-m04_ha-683878.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878 "sudo cat /home/docker/cp-test_ha-683878-m04_ha-683878.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt ha-683878-m02:/home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m02 "sudo cat /home/docker/cp-test_ha-683878-m04_ha-683878-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 cp ha-683878-m04:/home/docker/cp-test.txt ha-683878-m03:/home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 ssh -n ha-683878-m03 "sudo cat /home/docker/cp-test_ha-683878-m04_ha-683878-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.26s)
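The CopyFile block above fans out over every ordered pair of nodes: testdata/cp-test.txt is pushed to each node, then each node's copy is re-copied to every other node as cp-test_<src>_<dst>.txt and read back over ssh. The sketch below only prints that command matrix (the per-node copy back into the local temp directory is omitted); node and profile names are the ones from this run, and the log itself invokes the binary as out/minikube-linux-amd64 rather than minikube.

package main

import "fmt"

func main() {
	profile := "ha-683878"
	nodes := []string{"ha-683878", "ha-683878-m02", "ha-683878-m03", "ha-683878-m04"}
	for _, src := range nodes {
		// Seed each node with the test file.
		fmt.Printf("minikube -p %s cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", profile, src)
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// Copy src's file to dst under a name that records the pair,
			// then read it back to verify the transfer.
			target := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			fmt.Printf("minikube -p %s cp %s:/home/docker/cp-test.txt %s:%s\n", profile, src, dst, target)
			fmt.Printf("minikube -p %s ssh -n %s \"sudo cat %s\"\n", profile, dst, target)
		}
	}
}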

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.459531025s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.37s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-683878 node delete m03 -v=7 --alsologtostderr: (15.743226274s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.47s)
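The go-template passed to kubectl get nodes above walks .items, then each node's .status.conditions, and prints the status of every condition whose type is "Ready". A self-contained sketch that evaluates the same template string with Go's text/template against a hand-built stand-in for the decoded node list; only the fields the template touches are modeled.

package main

import (
	"os"
	"text/template"
)

func main() {
	// The same template string the test hands to kubectl via -o go-template=.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// A minimal stand-in for the JSON node list kubectl would decode.
	nodes := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"status": map[string]interface{}{
					"conditions": []interface{}{
						map[string]interface{}{"type": "MemoryPressure", "status": "False"},
						map[string]interface{}{"type": "Ready", "status": "True"},
					},
				},
			},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True"
		panic(err)
	}
}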

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (355.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-683878 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 17:47:47.733446   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:10.798691   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:49:52.218115   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-683878 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m55.060397812s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (355.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (79.03s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-683878 --control-plane -v=7 --alsologtostderr
E0815 17:52:47.734477   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:52:55.290809   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-683878 --control-plane -v=7 --alsologtostderr: (1m18.225335667s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-683878 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.03s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

TestJSONOutput/start/Command (87.67s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-401817 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0815 17:54:52.219204   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-401817 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m27.666178588s)
--- PASS: TestJSONOutput/start/Command (87.67s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.72s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-401817 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.62s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-401817 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-401817 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-401817 --output=json --user=testUser: (7.332994865s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-540923 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-540923 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.39264ms)

-- stdout --
	{"specversion":"1.0","id":"b83c82d0-31d4-445b-b7fe-5c7845a87ea8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-540923] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe465b9e-7e5f-4e72-8f4f-5d8d9d1a9c2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19450"}}
	{"specversion":"1.0","id":"e5ad8030-24b4-442f-8f4b-ea849bf163a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"17afe9b2-6a5f-4cfc-a948-9ed927f01b9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig"}}
	{"specversion":"1.0","id":"05c716ec-6bb2-457b-8982-7fd1da251055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube"}}
	{"specversion":"1.0","id":"1795259b-dc3d-4b74-a17e-ebff0e237c1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"08a3b0e6-1b43-4221-8e15-6872fa206029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7b09b3e5-3496-44bf-8027-6a230950fd4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-540923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-540923
--- PASS: TestErrorJSONOutput (0.19s)
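Note: each --output=json line captured above is a CloudEvents-style record. The decoder below is a minimal sketch whose struct mirrors only the keys visible in this log (specversion, id, source, type, datacontenttype, data); it is an assumption about the shape of these lines, not minikube's canonical schema.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields seen in the JSON lines above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Example use: minikube start -p demo --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip lines that are not JSON event records
		}
		// Error events (type io.k8s.sigs.minikube.error) carry exitcode and
		// message fields, as in the DRV_UNSUPPORTED_OS record above.
		fmt.Printf("%-35s %s\n", ev.Type, ev.Data["message"])
	}
}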

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (92.23s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-282120 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-282120 --driver=kvm2  --container-runtime=crio: (45.617698732s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-284376 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-284376 --driver=kvm2  --container-runtime=crio: (44.001466688s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-282120
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-284376
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-284376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-284376
helpers_test.go:175: Cleaning up "first-282120" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-282120
--- PASS: TestMinikubeProfile (92.23s)

TestMountStart/serial/StartWithMountFirst (29.53s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-875414 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-875414 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.526196871s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.53s)

TestMountStart/serial/VerifyMountFirst (0.36s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-875414 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-875414 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)

TestMountStart/serial/StartWithMountSecond (27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-887768 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0815 17:57:47.734347   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-887768 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.99580114s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.00s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-887768 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-887768 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-875414 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-887768 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-887768 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-887768
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-887768: (1.264651303s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (22.19s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-887768
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-887768: (21.187777883s)
--- PASS: TestMountStart/serial/RestartStopped (22.19s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-887768 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-887768 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (113.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-769827 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 17:59:52.218970   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-769827 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m53.003806421s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (113.40s)

TestMultiNode/serial/DeployApp2Nodes (5.92s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-769827 -- rollout status deployment/busybox: (4.480056292s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-jrvlv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-xvq5s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-jrvlv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-xvq5s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-jrvlv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-xvq5s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.92s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-jrvlv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-jrvlv -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-xvq5s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-769827 -- exec busybox-7dff88458-xvq5s -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

TestMultiNode/serial/AddNode (50s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-769827 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-769827 -v 3 --alsologtostderr: (49.442868437s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.00s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-769827 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (6.98s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp testdata/cp-test.txt multinode-769827:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3791465198/001/cp-test_multinode-769827.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827:/home/docker/cp-test.txt multinode-769827-m02:/home/docker/cp-test_multinode-769827_multinode-769827-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m02 "sudo cat /home/docker/cp-test_multinode-769827_multinode-769827-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827:/home/docker/cp-test.txt multinode-769827-m03:/home/docker/cp-test_multinode-769827_multinode-769827-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m03 "sudo cat /home/docker/cp-test_multinode-769827_multinode-769827-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp testdata/cp-test.txt multinode-769827-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3791465198/001/cp-test_multinode-769827-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827-m02:/home/docker/cp-test.txt multinode-769827:/home/docker/cp-test_multinode-769827-m02_multinode-769827.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827 "sudo cat /home/docker/cp-test_multinode-769827-m02_multinode-769827.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827-m02:/home/docker/cp-test.txt multinode-769827-m03:/home/docker/cp-test_multinode-769827-m02_multinode-769827-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m03 "sudo cat /home/docker/cp-test_multinode-769827-m02_multinode-769827-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp testdata/cp-test.txt multinode-769827-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3791465198/001/cp-test_multinode-769827-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt multinode-769827:/home/docker/cp-test_multinode-769827-m03_multinode-769827.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827 "sudo cat /home/docker/cp-test_multinode-769827-m03_multinode-769827.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 cp multinode-769827-m03:/home/docker/cp-test.txt multinode-769827-m02:/home/docker/cp-test_multinode-769827-m03_multinode-769827-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 ssh -n multinode-769827-m02 "sudo cat /home/docker/cp-test_multinode-769827-m03_multinode-769827-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.98s)

TestMultiNode/serial/StopNode (2.29s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-769827 node stop m03: (1.476657678s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-769827 status: exit status 7 (402.406063ms)

-- stdout --
	multinode-769827
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-769827-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-769827-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-769827 status --alsologtostderr: exit status 7 (408.215238ms)

-- stdout --
	multinode-769827
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-769827-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-769827-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0815 18:01:22.196154   49803 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:01:22.196284   49803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:01:22.196293   49803 out.go:358] Setting ErrFile to fd 2...
	I0815 18:01:22.196297   49803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:01:22.196459   49803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:01:22.196652   49803 out.go:352] Setting JSON to false
	I0815 18:01:22.196677   49803 mustload.go:65] Loading cluster: multinode-769827
	I0815 18:01:22.196725   49803 notify.go:220] Checking for updates...
	I0815 18:01:22.197000   49803 config.go:182] Loaded profile config "multinode-769827": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:01:22.197013   49803 status.go:255] checking status of multinode-769827 ...
	I0815 18:01:22.197351   49803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:01:22.197411   49803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:01:22.217360   49803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0815 18:01:22.217769   49803 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:01:22.218355   49803 main.go:141] libmachine: Using API Version  1
	I0815 18:01:22.218378   49803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:01:22.218749   49803 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:01:22.218921   49803 main.go:141] libmachine: (multinode-769827) Calling .GetState
	I0815 18:01:22.220454   49803 status.go:330] multinode-769827 host status = "Running" (err=<nil>)
	I0815 18:01:22.220470   49803 host.go:66] Checking if "multinode-769827" exists ...
	I0815 18:01:22.220759   49803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:01:22.220789   49803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:01:22.235596   49803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33459
	I0815 18:01:22.236053   49803 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:01:22.236477   49803 main.go:141] libmachine: Using API Version  1
	I0815 18:01:22.236514   49803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:01:22.236843   49803 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:01:22.237033   49803 main.go:141] libmachine: (multinode-769827) Calling .GetIP
	I0815 18:01:22.239548   49803 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:01:22.239931   49803 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:01:22.239966   49803 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:01:22.240040   49803 host.go:66] Checking if "multinode-769827" exists ...
	I0815 18:01:22.240363   49803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:01:22.240401   49803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:01:22.254854   49803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40705
	I0815 18:01:22.255185   49803 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:01:22.255629   49803 main.go:141] libmachine: Using API Version  1
	I0815 18:01:22.255650   49803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:01:22.255949   49803 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:01:22.256144   49803 main.go:141] libmachine: (multinode-769827) Calling .DriverName
	I0815 18:01:22.256336   49803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 18:01:22.256363   49803 main.go:141] libmachine: (multinode-769827) Calling .GetSSHHostname
	I0815 18:01:22.258740   49803 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:01:22.259080   49803 main.go:141] libmachine: (multinode-769827) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:7f:ec", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:58:37 +0000 UTC Type:0 Mac:52:54:00:80:7f:ec Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-769827 Clientid:01:52:54:00:80:7f:ec}
	I0815 18:01:22.259102   49803 main.go:141] libmachine: (multinode-769827) DBG | domain multinode-769827 has defined IP address 192.168.39.73 and MAC address 52:54:00:80:7f:ec in network mk-multinode-769827
	I0815 18:01:22.259250   49803 main.go:141] libmachine: (multinode-769827) Calling .GetSSHPort
	I0815 18:01:22.259407   49803 main.go:141] libmachine: (multinode-769827) Calling .GetSSHKeyPath
	I0815 18:01:22.259542   49803 main.go:141] libmachine: (multinode-769827) Calling .GetSSHUsername
	I0815 18:01:22.259684   49803 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827/id_rsa Username:docker}
	I0815 18:01:22.344339   49803 ssh_runner.go:195] Run: systemctl --version
	I0815 18:01:22.350554   49803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:01:22.365208   49803 kubeconfig.go:125] found "multinode-769827" server: "https://192.168.39.73:8443"
	I0815 18:01:22.365235   49803 api_server.go:166] Checking apiserver status ...
	I0815 18:01:22.365264   49803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 18:01:22.378680   49803 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup
	W0815 18:01:22.388160   49803 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1142/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0815 18:01:22.388214   49803 ssh_runner.go:195] Run: ls
	I0815 18:01:22.392426   49803 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I0815 18:01:22.397216   49803 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
	I0815 18:01:22.397235   49803 status.go:422] multinode-769827 apiserver status = Running (err=<nil>)
	I0815 18:01:22.397249   49803 status.go:257] multinode-769827 status: &{Name:multinode-769827 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 18:01:22.397266   49803 status.go:255] checking status of multinode-769827-m02 ...
	I0815 18:01:22.397584   49803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:01:22.397620   49803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:01:22.412454   49803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39981
	I0815 18:01:22.412834   49803 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:01:22.413221   49803 main.go:141] libmachine: Using API Version  1
	I0815 18:01:22.413238   49803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:01:22.413539   49803 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:01:22.413692   49803 main.go:141] libmachine: (multinode-769827-m02) Calling .GetState
	I0815 18:01:22.415113   49803 status.go:330] multinode-769827-m02 host status = "Running" (err=<nil>)
	I0815 18:01:22.415127   49803 host.go:66] Checking if "multinode-769827-m02" exists ...
	I0815 18:01:22.415419   49803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:01:22.415448   49803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:01:22.429978   49803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33183
	I0815 18:01:22.430416   49803 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:01:22.430833   49803 main.go:141] libmachine: Using API Version  1
	I0815 18:01:22.430851   49803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:01:22.431130   49803 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:01:22.431301   49803 main.go:141] libmachine: (multinode-769827-m02) Calling .GetIP
	I0815 18:01:22.433873   49803 main.go:141] libmachine: (multinode-769827-m02) DBG | domain multinode-769827-m02 has defined MAC address 52:54:00:82:93:d6 in network mk-multinode-769827
	I0815 18:01:22.434269   49803 main.go:141] libmachine: (multinode-769827-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:93:d6", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:59:40 +0000 UTC Type:0 Mac:52:54:00:82:93:d6 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-769827-m02 Clientid:01:52:54:00:82:93:d6}
	I0815 18:01:22.434313   49803 main.go:141] libmachine: (multinode-769827-m02) DBG | domain multinode-769827-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:82:93:d6 in network mk-multinode-769827
	I0815 18:01:22.434451   49803 host.go:66] Checking if "multinode-769827-m02" exists ...
	I0815 18:01:22.434724   49803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:01:22.434762   49803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:01:22.449440   49803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42305
	I0815 18:01:22.449827   49803 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:01:22.450249   49803 main.go:141] libmachine: Using API Version  1
	I0815 18:01:22.450268   49803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:01:22.450527   49803 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:01:22.450668   49803 main.go:141] libmachine: (multinode-769827-m02) Calling .DriverName
	I0815 18:01:22.450803   49803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 18:01:22.450817   49803 main.go:141] libmachine: (multinode-769827-m02) Calling .GetSSHHostname
	I0815 18:01:22.453283   49803 main.go:141] libmachine: (multinode-769827-m02) DBG | domain multinode-769827-m02 has defined MAC address 52:54:00:82:93:d6 in network mk-multinode-769827
	I0815 18:01:22.453846   49803 main.go:141] libmachine: (multinode-769827-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:93:d6", ip: ""} in network mk-multinode-769827: {Iface:virbr1 ExpiryTime:2024-08-15 18:59:40 +0000 UTC Type:0 Mac:52:54:00:82:93:d6 Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:multinode-769827-m02 Clientid:01:52:54:00:82:93:d6}
	I0815 18:01:22.453875   49803 main.go:141] libmachine: (multinode-769827-m02) DBG | domain multinode-769827-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:82:93:d6 in network mk-multinode-769827
	I0815 18:01:22.454017   49803 main.go:141] libmachine: (multinode-769827-m02) Calling .GetSSHPort
	I0815 18:01:22.454207   49803 main.go:141] libmachine: (multinode-769827-m02) Calling .GetSSHKeyPath
	I0815 18:01:22.454383   49803 main.go:141] libmachine: (multinode-769827-m02) Calling .GetSSHUsername
	I0815 18:01:22.454614   49803 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19450-13013/.minikube/machines/multinode-769827-m02/id_rsa Username:docker}
	I0815 18:01:22.531416   49803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 18:01:22.546085   49803 status.go:257] multinode-769827-m02 status: &{Name:multinode-769827-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 18:01:22.546117   49803 status.go:255] checking status of multinode-769827-m03 ...
	I0815 18:01:22.546409   49803 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0815 18:01:22.546444   49803 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0815 18:01:22.561737   49803 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36287
	I0815 18:01:22.562145   49803 main.go:141] libmachine: () Calling .GetVersion
	I0815 18:01:22.562551   49803 main.go:141] libmachine: Using API Version  1
	I0815 18:01:22.562572   49803 main.go:141] libmachine: () Calling .SetConfigRaw
	I0815 18:01:22.562806   49803 main.go:141] libmachine: () Calling .GetMachineName
	I0815 18:01:22.562958   49803 main.go:141] libmachine: (multinode-769827-m03) Calling .GetState
	I0815 18:01:22.564411   49803 status.go:330] multinode-769827-m03 host status = "Stopped" (err=<nil>)
	I0815 18:01:22.564425   49803 status.go:343] host is not running, skipping remaining checks
	I0815 18:01:22.564431   49803 status.go:257] multinode-769827-m03 status: &{Name:multinode-769827-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
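Note: the status trace above finishes by requesting https://192.168.39.73:8443/healthz and treating a 200 response with body "ok" as a running apiserver. The probe below follows the same idea as a standalone sketch; the address is copied from the trace, and skipping TLS verification is a shortcut to keep the example self-contained, not how minikube's own client talks to the apiserver.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Address taken from the trace above; substitute your cluster's apiserver.
	url := "https://192.168.39.73:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping certificate verification keeps the sketch dependency-free;
		// a real check should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver: Stopped (", err, ")")
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver: Running")
	} else {
		fmt.Printf("apiserver: unhealthy (%d: %s)\n", resp.StatusCode, body)
	}
}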

TestMultiNode/serial/StartAfterStop (40.05s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-769827 node start m03 -v=7 --alsologtostderr: (39.443637035s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (40.05s)

TestMultiNode/serial/DeleteNode (1.96s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-769827 node delete m03: (1.45446577s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.96s)

TestMultiNode/serial/RestartMultiNode (207.91s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-769827 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0815 18:12:47.734370   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-769827 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m27.399950653s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-769827 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (207.91s)

TestMultiNode/serial/ValidateNameConflict (40.39s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-769827
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-769827-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-769827-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (60.676265ms)

-- stdout --
	* [multinode-769827-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-769827-m02' is duplicated with machine name 'multinode-769827-m02' in profile 'multinode-769827'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-769827-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-769827-m03 --driver=kvm2  --container-runtime=crio: (39.087657545s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-769827
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-769827: exit status 80 (210.317414ms)

-- stdout --
	* Adding node m03 to cluster multinode-769827 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-769827-m03 already exists in multinode-769827-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-769827-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (40.39s)

TestScheduledStopUnix (113.82s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-028675 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-028675 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.317147649s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-028675 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-028675 -n scheduled-stop-028675
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-028675 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-028675 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-028675 -n scheduled-stop-028675
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-028675
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-028675 --schedule 15s
E0815 18:19:52.218902   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-028675
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-028675: exit status 7 (64.211319ms)

                                                
                                                
-- stdout --
	scheduled-stop-028675
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-028675 -n scheduled-stop-028675
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-028675 -n scheduled-stop-028675: exit status 7 (60.220605ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-028675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-028675
--- PASS: TestScheduledStopUnix (113.82s)
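
For anyone reproducing the scheduled-stop flow this test drives, the sequence is roughly the following (a minimal sketch using the profile name from the run above; the plain `minikube` binary stands in for `out/minikube-linux-amd64`):

    # schedule a stop five minutes out, then inspect the pending schedule
    minikube stop -p scheduled-stop-028675 --schedule 5m
    minikube status -p scheduled-stop-028675 --format '{{.TimeToStop}}'

    # shorten the schedule, or cancel it outright
    minikube stop -p scheduled-stop-028675 --schedule 15s
    minikube stop -p scheduled-stop-028675 --cancel-scheduled

    # once a scheduled stop has fired, status prints "Stopped" and exits 7
    minikube status -p scheduled-stop-028675 --format '{{.Host}}'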

                                                
                                    
x
+
TestRunningBinaryUpgrade (197.45s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1494782562 start -p running-upgrade-708889 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1494782562 start -p running-upgrade-708889 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m58.704782254s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-708889 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-708889 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.143001641s)
helpers_test.go:175: Cleaning up "running-upgrade-708889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-708889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-708889: (10.95608015s)
--- PASS: TestRunningBinaryUpgrade (197.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-692760 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-692760 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (69.595107ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-692760] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
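
The exit status 14 above is the expected usage error: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A sketch of the failing call and the remedy the message itself suggests (profile name reused purely for illustration):

    # rejected with MK_USAGE (exit status 14)
    minikube start -p NoKubernetes-692760 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio

    # clear any globally configured version, then start without Kubernetes
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-692760 --no-kubernetes --driver=kvm2 --container-runtime=crio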

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (90.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-692760 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-692760 --driver=kvm2  --container-runtime=crio: (1m30.54137575s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-692760 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (90.77s)

                                                
                                    
x
+
TestPause/serial/Start (133.68s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-728850 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-728850 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m13.682490128s)
--- PASS: TestPause/serial/Start (133.68s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-692760 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0815 18:22:30.802334   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-692760 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.397925749s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-692760 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-692760 status -o json: exit status 2 (234.749107ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-692760","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-692760
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-692760: (1.024802527s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.66s)
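
The JSON emitted by `status -o json` above is convenient for scripting; a small sketch (assuming `jq` is installed, which the test itself does not require):

    # Host should be Running while Kubelet/APIServer report Stopped for a --no-kubernetes profile;
    # note that `minikube status` itself exits 2 in this state, as seen above
    minikube -p NoKubernetes-692760 status -o json | jq -r '.Host, .Kubelet, .APIServer'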

                                                
                                    
x
+
TestNoKubernetes/serial/Start (28.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-692760 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0815 18:22:47.734432   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-692760 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.603528101s)
--- PASS: TestNoKubernetes/serial/Start (28.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-692760 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-692760 "sudo systemctl is-active --quiet service kubelet": exit status 1 (185.422658ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)
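
The non-zero exit above is the point of the check: `systemctl is-active --quiet` returns a non-zero status (3 for an inactive unit) when kubelet is not running, and `minikube ssh` surfaces that as a failure. Sketch:

    # succeeds (exit 0) only if kubelet is active inside the VM; here it is expected to fail
    minikube ssh -p NoKubernetes-692760 "sudo systemctl is-active --quiet service kubelet"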

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (15.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (14.980656206s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.60s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-692760
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-692760: (1.326079229s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (24.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-692760 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-692760 --driver=kvm2  --container-runtime=crio: (24.403213064s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (24.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-692760 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-692760 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.970946ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (2.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-443473 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-443473 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (102.634368ms)

                                                
                                                
-- stdout --
	* [false-443473] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 18:23:57.584560   61189 out.go:345] Setting OutFile to fd 1 ...
	I0815 18:23:57.584716   61189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:23:57.584727   61189 out.go:358] Setting ErrFile to fd 2...
	I0815 18:23:57.584734   61189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 18:23:57.584998   61189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-13013/.minikube/bin
	I0815 18:23:57.585738   61189 out.go:352] Setting JSON to false
	I0815 18:23:57.587035   61189 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7584,"bootTime":1723738654,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0815 18:23:57.587115   61189 start.go:139] virtualization: kvm guest
	I0815 18:23:57.589454   61189 out.go:177] * [false-443473] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0815 18:23:57.590968   61189 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 18:23:57.590981   61189 notify.go:220] Checking for updates...
	I0815 18:23:57.593355   61189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 18:23:57.594731   61189 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-13013/kubeconfig
	I0815 18:23:57.596698   61189 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-13013/.minikube
	I0815 18:23:57.598036   61189 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0815 18:23:57.599348   61189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 18:23:57.601421   61189 config.go:182] Loaded profile config "force-systemd-env-618999": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0
	I0815 18:23:57.601564   61189 config.go:182] Loaded profile config "kubernetes-upgrade-729203": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0815 18:23:57.601695   61189 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 18:23:57.639016   61189 out.go:177] * Using the kvm2 driver based on user configuration
	I0815 18:23:57.639993   61189 start.go:297] selected driver: kvm2
	I0815 18:23:57.640012   61189 start.go:901] validating driver "kvm2" against <nil>
	I0815 18:23:57.640024   61189 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 18:23:57.641978   61189 out.go:201] 
	W0815 18:23:57.643119   61189 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0815 18:23:57.644420   61189 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-443473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-443473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-443473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-443473"

                                                
                                                
----------------------- debugLogs end: false-443473 [took: 2.68001394s] --------------------------------
helpers_test.go:175: Cleaning up "false-443473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-443473
--- PASS: TestNetworkPlugins/group/false (2.92s)
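
The MK_USAGE failure above is intentional: the crio runtime refuses `--cni=false`. A sketch of the rejected call and an accepted alternative (the choice of bridge here is purely illustrative; any concrete CNI, or the default auto-selection, satisfies the check):

    # exits 14: The "crio" container runtime requires CNI
    minikube start -p false-443473 --cni=false --driver=kvm2 --container-runtime=crio

    # pick a concrete CNI, or omit --cni to let minikube choose
    minikube start -p false-443473 --cni=bridge --driver=kvm2 --container-runtime=crio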

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.57s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (117.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2363417323 start -p stopped-upgrade-498665 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2363417323 start -p stopped-upgrade-498665 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (54.617253463s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2363417323 -p stopped-upgrade-498665 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2363417323 -p stopped-upgrade-498665 stop: (3.457228757s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-498665 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-498665 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (59.838834105s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (117.91s)
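
Condensed, the upgrade path exercised above is: start the profile with the old release binary, stop the cluster, then start the same profile with the binary under test (the paths are the temporary download and the build output from this run):

    /tmp/minikube-v1.26.0.2363417323 start -p stopped-upgrade-498665 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.2363417323 -p stopped-upgrade-498665 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-498665 --memory=2200 --driver=kvm2 --container-runtime=crio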

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-498665
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (73.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-599042 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-599042 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m13.671245652s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.67s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (108.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-555028 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
E0815 18:27:47.734125   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-555028 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (1m48.862065688s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (108.86s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (11.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-599042 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [38120fa0-c110-4003-a0a2-ecf726f1a3b6] Pending
helpers_test.go:344: "busybox" [38120fa0-c110-4003-a0a2-ecf726f1a3b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [38120fa0-c110-4003-a0a2-ecf726f1a3b6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.004414964s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-599042 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.34s)
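
The helper's 8-minute poll for the busybox pod is roughly equivalent to a `kubectl wait`; a hand-run sketch of the same check (the wait invocation is an illustrative equivalent, not what the test literally executes):

    kubectl --context no-preload-599042 create -f testdata/busybox.yaml
    kubectl --context no-preload-599042 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-599042 exec busybox -- /bin/sh -c "ulimit -n"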

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-423062 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-423062 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (54.702268432s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.70s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-599042 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-599042 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.96s)
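
The addon is enabled while the cluster is up, with its image and registry redirected to stand-in values, and the describe call then confirms the override landed. Equivalent hand-run commands (again with `minikube` standing in for the test binary):

    minikube addons enable metrics-server -p no-preload-599042 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context no-preload-599042 describe deploy/metrics-server -n kube-system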

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-555028 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c9207c57-a365-4624-a66b-602b1defc62f] Pending
helpers_test.go:344: "busybox" [c9207c57-a365-4624-a66b-602b1defc62f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c9207c57-a365-4624-a66b-602b1defc62f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004824445s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-555028 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-555028 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-555028 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-423062 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c26ca004-1d45-4ab6-ae7d-1e32614dccc0] Pending
helpers_test.go:344: "busybox" [c26ca004-1d45-4ab6-ae7d-1e32614dccc0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c26ca004-1d45-4ab6-ae7d-1e32614dccc0] Running
E0815 18:29:52.218360   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00304951s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-423062 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-423062 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-423062 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (642.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-599042 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-599042 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (10m42.134975506s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-599042 -n no-preload-599042
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (642.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (573.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-555028 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-555028 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (9m32.897266208s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-555028 -n embed-certs-555028
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (573.14s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (539.56s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-423062 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-423062 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (8m59.313183955s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-423062 -n default-k8s-diff-port-423062
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (539.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-278865 --alsologtostderr -v=3
E0815 18:32:47.733750   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-278865 --alsologtostderr -v=3: (6.28749387s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-278865 -n old-k8s-version-278865: exit status 7 (60.620941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-278865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
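
As the "may be ok" note says, exit status 7 from `status` simply means the profile is stopped; addons can still be toggled in that state. Sketch:

    # exits 7 and prints "Stopped" for a stopped profile
    minikube status -p old-k8s-version-278865 --format '{{.Host}}'
    # enabling an addon on the stopped profile still succeeds
    minikube addons enable dashboard -p old-k8s-version-278865 --images=MetricsScraper=registry.k8s.io/echoserver:1.4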

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-828957 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-828957 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (47.826608291s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.83s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-828957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-828957 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.131505127s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-828957 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-828957 --alsologtostderr -v=3: (7.341533972s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828957 -n newest-cni-828957
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828957 -n newest-cni-828957: exit status 7 (64.924571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-828957 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-828957 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-828957 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.0: (36.452397657s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-828957 -n newest-cni-828957
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.70s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-828957 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-828957 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-828957 -n newest-cni-828957
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-828957 -n newest-cni-828957: exit status 2 (247.006658ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-828957 -n newest-cni-828957
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-828957 -n newest-cni-828957: exit status 2 (257.882579ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-828957 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-828957 -n newest-cni-828957
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-828957 -n newest-cni-828957
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.61s)
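While paused, both status probes above exit non-zero (status 2 in this run) by design: the API server reports Paused and the kubelet reports Stopped, and the test accepts both before unpausing. A rough sketch of the same cycle, assuming the profile name from this run:

    # Pause the control plane, confirm the reported component states, then resume.
    out/minikube-linux-amd64 pause   -p newest-cni-828957 --alsologtostderr -v=1
    out/minikube-linux-amd64 status  --format='{{.APIServer}}' -p newest-cni-828957 -n newest-cni-828957   # "Paused", non-zero exit
    out/minikube-linux-amd64 status  --format='{{.Kubelet}}'   -p newest-cni-828957 -n newest-cni-828957   # "Stopped", non-zero exit
    out/minikube-linux-amd64 unpause -p newest-cni-828957 --alsologtostderr -v=1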

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (86.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m26.372773443s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (90.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m30.259113704s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (90.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (129.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0815 18:57:47.734135   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/functional-773344/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:45.391311   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:45.397729   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:45.409120   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:45.430532   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:45.472017   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:45.554023   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:45.716381   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:46.038084   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:46.679736   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:58:47.961419   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m9.330870111s)
--- PASS: TestNetworkPlugins/group/calico/Start (129.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (76.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0815 18:58:55.644991   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m16.326813284s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-443473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-443473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fvgpv" [f5a208e2-78fe-48a2-9279-25821560a500] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 18:59:05.886289   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-fvgpv" [f5a208e2-78fe-48a2-9279-25821560a500] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004255636s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.26s)
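Each NetCatPod step in this group deploys the same testdata/netcat-deployment.yaml workload and then waits for its app=netcat pod to report Ready. A roughly equivalent manual check, assuming the auto-443473 context used above (the suite itself polls via its own helpers rather than kubectl wait):

    # Recreate the netcat test deployment and wait for its pod to become Ready.
    kubectl --context auto-443473 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-443473 wait --for=condition=ready pod -l app=netcat --timeout=15m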

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bc9zv" [dc5ad389-0895-4742-bd54-d542dbb0433d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007314046s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-443473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-443473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b2lcq" [0860e2ee-0723-4755-821e-5cf287c80493] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b2lcq" [0860e2ee-0723-4755-821e-5cf287c80493] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005341324s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-443473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
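The DNS, Localhost and HairPin steps all exec into that netcat deployment: DNS resolves kubernetes.default through the cluster DNS, Localhost connects to the pod's own listener on 127.0.0.1:8080, and HairPin connects back to the same pod via the netcat name, presumably exercising hairpinned (service-to-self) traffic. The three probes, as run against the auto-443473 context:

    # In-pod DNS resolution, then loopback and hairpin connectivity to port 8080.
    kubectl --context auto-443473 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"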

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-443473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (83.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0815 18:59:35.299757   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m23.73222868s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (94.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0815 18:59:43.183780   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:43.190255   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:43.201733   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:43.223129   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:43.264607   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:43.346035   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:43.507546   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:43.829754   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:44.471975   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:45.753491   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m34.025912896s)
--- PASS: TestNetworkPlugins/group/flannel/Start (94.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-s8bsn" [7bd411a5-5d12-44b3-957a-d1dbc464e653] Running
E0815 18:59:48.315353   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:52.219166   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/addons-973562/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:59:53.437640   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004338433s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-443473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-443473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mbqpj" [abb62d06-d2c7-4fde-9d8b-a66108c5eec7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mbqpj" [abb62d06-d2c7-4fde-9d8b-a66108c5eec7] Running
E0815 19:00:03.679434   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004355107s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-443473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-443473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-443473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mzpdn" [a068bafc-1486-411f-bfc9-6150250ca2cc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 19:00:13.142636   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:13.149098   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:13.160437   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:13.182407   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:13.223893   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:13.305767   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:13.467934   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mzpdn" [a068bafc-1486-411f-bfc9-6150250ca2cc] Running
E0815 19:00:13.789629   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:14.431897   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:15.714155   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:18.275925   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004820885s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-443473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (60.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0815 19:00:23.397674   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:24.161076   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
E0815 19:00:33.640038   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/old-k8s-version-278865/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-443473 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m0.410845455s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.41s)
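For reference, the plugin profiles in this group share the same base start flags (--memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 --container-runtime=crio) and differ only in how the CNI is selected. A condensed, sketch-level summary of the variants exercised above, with the base flags omitted for brevity:

    out/minikube-linux-amd64 start -p auto-443473                                            # no CNI flag: minikube picks the default
    out/minikube-linux-amd64 start -p kindnet-443473            --cni=kindnet
    out/minikube-linux-amd64 start -p calico-443473             --cni=calico
    out/minikube-linux-amd64 start -p custom-flannel-443473     --cni=testdata/kube-flannel.yaml
    out/minikube-linux-amd64 start -p enable-default-cni-443473 --enable-default-cni=true
    out/minikube-linux-amd64 start -p flannel-443473            --cni=flannel
    out/minikube-linux-amd64 start -p bridge-443473             --cni=bridge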

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-443473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-443473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hbzvv" [9e659da1-9bc0-4f61-a0b3-7b3acde83641] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hbzvv" [9e659da1-9bc0-4f61-a0b3-7b3acde83641] Running
E0815 19:01:05.123247   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/default-k8s-diff-port-423062/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00456005s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-443473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gnrbf" [5783d0d3-dcad-4939-80ae-8047ee678717] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005131747s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
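The kindnet, calico and flannel groups each wait for the CNI's controller pod by label before running connectivity checks; the selectors and namespaces below are the ones shown in this report. Equivalent manual lookups:

    # CNI controller pods polled by the ControllerPod steps.
    kubectl --context kindnet-443473 -n kube-system  get pods -l app=kindnet
    kubectl --context calico-443473  -n kube-system  get pods -l k8s-app=calico-node
    kubectl --context flannel-443473 -n kube-flannel get pods -l app=flannel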

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-443473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-443473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pnsgm" [14619b2b-d728-495b-9917-3729952da4fe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pnsgm" [14619b2b-d728-495b-9917-3729952da4fe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003987613s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-443473 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-443473 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8svf7" [874ef90c-7688-4bdd-8c2a-9b3418a09327] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8svf7" [874ef90c-7688-4bdd-8c2a-9b3418a09327] Running
E0815 19:01:29.252080   20219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-13013/.minikube/profiles/no-preload-599042/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004149019s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-443473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-443473 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-443473 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (37/312)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.0/cached-images 0
15 TestDownloadOnly/v1.31.0/binaries 0
16 TestDownloadOnly/v1.31.0/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
38 TestAddons/parallel/Olm 0
48 TestDockerFlags 0
51 TestDockerEnvContainerd 0
53 TestHyperKitDriverInstallOrUpdate 0
54 TestHyperkitDriverSkipUpgrade 0
105 TestFunctional/parallel/DockerEnv 0
106 TestFunctional/parallel/PodmanEnv 0
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
143 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
154 TestGvisorAddon 0
176 TestImageBuild 0
203 TestKicCustomNetwork 0
204 TestKicExistingNetwork 0
205 TestKicCustomSubnet 0
206 TestKicStaticIP 0
238 TestChangeNoneUser 0
241 TestScheduledStopWindows 0
243 TestSkaffold 0
245 TestInsufficientStorage 0
249 TestMissingContainerUpgrade 0
258 TestStartStop/group/disable-driver-mounts 0.14
271 TestNetworkPlugins/group/kubenet 2.84
280 TestNetworkPlugins/group/cilium 3.24
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:879: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-698209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-698209
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-443473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-443473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-443473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-443473"

                                                
                                                
----------------------- debugLogs end: kubenet-443473 [took: 2.700767006s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-443473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-443473
--- SKIP: TestNetworkPlugins/group/kubenet (2.84s)
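Note: this test skips before the kubenet-443473 profile is ever created — kubenet provides no CNI, and the crio runtime on this job requires one — which is why every probe in the debugLogs dump above reports a missing context or profile. A hypothetical Go sketch of such a runtime-gated skip is shown below; the helper name and signature are illustrative, not minikube's actual net_test.go code.

package net_sketch

import "testing"

// skipIfRuntimeNeedsCNI shows the shape of the check behind the skip above:
// kubenet has no CNI support, so any runtime that depends on a CNI plugin
// (crio, containerd) cannot exercise this network test.
func skipIfRuntimeNeedsCNI(t *testing.T, containerRuntime string) {
	t.Helper()
	if containerRuntime != "docker" {
		t.Skipf("Skipping the test as the %s container runtime requires CNI", containerRuntime)
	}
}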

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-443473 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-443473" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-443473

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-443473" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-443473"

                                                
                                                
----------------------- debugLogs end: cilium-443473 [took: 3.097657399s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-443473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-443473
--- SKIP: TestNetworkPlugins/group/cilium (3.24s)

                                                
                                    